Screen door Bahtinov Mask

August 24th, 2014

After making and using the Paper Bahtinov Mask for a while, I decided to try to make a better, cheaper, and perhaps more robust mask. The original Paper Bahtinov Mask still works; however, it's a little too delicate for my tastes, and I would rather have a more resilient, foldable solution for hiking or camping trips in the wilderness, where compactness is important.

When mulling it over, several key considerations led to the idea of using screen material as the base for a Bahtinov mask. The first is that screen material is fairly robust and cloth-like, so it's relatively easy to store and decently strong. The second is that the defining feature of the traditional Bahtinov mask is its particular sets of parallel slits in each direction. It's the obstruction of light in a particular orientation that creates the strong signal (the "light streak" that can be seen when zoomed in on the viewfinder).

By the same line of reasoning, using a screen as a base shouldn't, in theory, have much effect on the diffraction pattern other than reducing the incoming light a tad. The screen pattern obstructs light roughly equally in the horizontal and vertical directions, so if the obstruction is small, any directionality it introduces is largely averaged out. If we align the portion of the Bahtinov mask that corresponds to the "large streak" (the largest portion, whose slits run vertically or horizontally) with the screen's grid, we can ensure that that signal doesn't get attenuated. The perpendicular orientation of the screen will then attenuate the two symmetric portions of the mask equally; the overall effect is a somewhat stronger middle streak, but the general characteristics of the Bahtinov mask are retained.
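As a rough sanity check on that "tad" of light loss, here is a quick estimate (my own, not from the original build; the strand thickness is an assumption, since I never measured the screen):

import math

pitch_mm = 2.0      # center-to-center spacing of the screen strands (matches the ~2 mm screen)
strand_mm = 0.25    # assumed strand thickness, for illustration only

open_fraction = ((pitch_mm - strand_mm) / pitch_mm) ** 2
loss_stops = -math.log2(open_fraction)
print(f"Open area: {open_fraction:.0%}, light loss: about {loss_stops:.2f} stops")
# -> roughly 77% open area, around 0.4 stops lost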

With that in mind, I decided to use a sheet of extra screen I had lying around, with painter's tape as the masking material to create the pattern. I did not generate a mask template this time. From the post on the Paper Bahtinov Mask, the important insight is that the smaller symmetric portions must remain parallel and symmetric to create the banding patterns that yield the diagonal light streaks. Therefore, I chose a slope for the slanted portions that was easy to eyeball and looked aesthetically pleasing, which ended up being a 3:1 vertical-to-horizontal ratio.

(Note: I understand, of course, that the angle of the smaller symmetric portions affects the angle of the diagonal streaks, but as long as it's visually appealing (I'm not using a computer to aid my focusing), I think it will be more than sufficient. The angle doesn't affect the construction anyway.)
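Out of curiosity, here is a quick back-of-the-envelope estimate (my own, not part of the build) of what the 3:1 slope implies, under the assumption that each streak runs perpendicular to its set of slits and is therefore offset from the middle streak by the slit tilt:

import math

rise, run = 3, 1   # vertical : horizontal ratio of the slanted slits
tilt = math.degrees(math.atan2(run, rise))
print(f"Slit tilt, and roughly the diagonal streaks' offset from the middle streak: {tilt:.1f} degrees")
# -> about 18.4 degrees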

Construction

The construction is relatively simple. First, trace a circle with a Sharpie or some other marking utensil that covers the appropriate mask area. Next, decide how narrow you want each of the slots and bands of the mask to be. In this trial I chose 2 mm, because I can cut fairly straight slivers of painter's tape at that accuracy, and because the more slots I have, the better the signal should hopefully be. The screen material was also spaced at roughly 2 mm.

Start of construction

After marking the area I wanted to cover and laying down a slightly thicker piece of tape (4 mm) to divide the mask into two halves, I placed a third vertical 2 mm band over the middle to section off the smaller symmetric diagonal portions. I then marked and cut 2 mm-wide strips of painter's tape and applied them to the screen material. The diagonal bands were placed at a 3:1 vertical-to-horizontal slope, and the large bottom portion received vertical bands. The diagonal bands were more or less eyeballed at an even spacing and checked for symmetry. The vertical bands were placed 2 mm (one screen grid spacing) apart. The mask took about 20-30 minutes to cut and piece together.

Cutting the bands made of painter's tape

Mask used for the demonstration

Demonstration Test

The Screen door Bahtinov Mask is attached to the camera by centering the mask and then folding the remaining material around the lens. A rubber band or temporary zip tie holds the screen in place while focusing manually. After focus is achieved, the zip tie or rubber band is removed.

The following pictures show the mask in action. In this test I unfortunately used a headlamp that has two LEDs, so if you zoom in you will see not one but two sets of light streaks. The pictures were taken from a chair with a 2-second delay.

original image

focused image

And the in-focus image.

focused image

After using it, I noticed that some of the bands of painter's tape weren't adhering to the screen very strongly, so I added some thicker bands of painter's tape around the periphery as well as along the main horizontal and center vertical bands.

Conclusion

As you can see here, the Screen door Bahtinov Mask seems to work fairly well. I'd venture to say it works as well as my Paper Bahtinov Mask, which I used for a few nice panoramas at Sequoia National Park. Here's one of my 500px photos where I used the Paper Bahtinov Mask for focusing.

There are definitely a few improvements I could make. Placing painter's tape on a screen naturally evokes thoughts of screen printing. I don't have a lot of experience in that area, and you would have to go through the screen-printing masking and etching process for every few Bahtinov masks you want to make. A similar idea I am proposing is to 3D print a relatively large yet fine Bahtinov mask at a thin thickness (say 4-5 mm). Place a screen over the thin 3D-printed mask, then use a spray-on plastic like Plasti Dip and spray the mask several times, reapplying over the sacrificial base until a thin layer of plastic forms atop the screen, which in turn sits atop the Bahtinov mask. Lastly, peel the screen from the mask, and hopefully you end up with a beautifully opaque Bahtinov mask. If one prints a large 3D mask that covers up to 77-80 mm, that should cover most camera lenses. Making masks for smaller lenses is simple, since the large mask can be reused: simply cover part of the screen so that only a smaller portion of the mask and screen is exposed to the Plasti Dip. If anyone tries this out, please let me know if it works.

Paper Bahtinov Mask

August 24th, 2014

Lately, I’ve been getting into astrophotography, and so I’ve been spending several hours every month or so outside at night. The thing about astrophotography is that you have to be somewhat familiar with identifying constellations and then on top of that you have to be able to master the technicalities of acquiring enough light in the proper fashion to produce acceptable and interesting photos.

Like all beginners, I made the mistake of spending two hours taking a bunch of pictures in a wonderful place (Lone Pine, CA), where light pollution during a new moon is almost non-existent, while neglecting to check the most important thing: the FOCUS!

Unfortunately, as you may know, I am using a kit auto-focusing zoom lens (SEL1855) with my NEX 5R. The focus ring can be operated manually (which is great for astrophotography), but it has no distance markings, because it isn't a true mechanical ring. It's more or less a potentiometer that tells the motor how much to change the focus, so it spins around indefinitely. As you can tell, this makes focusing a tad harder, since you can never turn the ring until it stops at infinity (or past infinity). Also, the NEX 5R doesn't display your approximate focus distance anywhere on the camera. Even if it did, astrophotography is relatively unforgiving when it comes to nailing down the focus.

Reading up on focus methods for astrophotography turned up a nifty diffraction mask called the Bahtinov mask. It was invented by Pavel Bahtinov (although I can't really find much about him) for accurately focusing telescopes. You can read more about it on Wikipedia, but in short, the diffraction pattern creates three "streaks" of light that intersect each other (although not necessarily at the same location) when you zoom in and focus on a bright light source at night with the mask attached. When the image is out of focus, the intersection point will not be in the middle. The convenience of the Bahtinov mask is that, because the mask is symmetric about one axis, the two diagonal streaks are symmetric, so the position of the "middle" streak can be judged quite easily by the human eye. When the image is in focus, the middle streak should be exactly in between the two diagonal streaks and all three should intersect at one point!
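To build some intuition for why the mask produces exactly three streaks, here is a toy Fraunhofer-diffraction sketch (my own illustration, not from the original post); the slit period, duty cycle, and 18.4-degree tilt are assumed values rather than measurements of any real mask:

import numpy as np
import matplotlib.pyplot as plt

N = 1024                       # simulation grid size in pixels
period, duty = 16, 8           # slit spacing and open width, in pixels (arbitrary)
tilt = np.deg2rad(18.4)        # tilt of the two symmetric slit sets (~3:1 slope)

y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (x ** 2 + y ** 2) < (N // 2 - 8) ** 2    # circular aperture

def grating(u):
    # True wherever coordinate u falls inside the transparent part of a grating
    return (u % period) < duty

mask = np.zeros((N, N), dtype=bool)
left = x < 0
upper = (x >= 0) & (y >= 0)
lower = (x >= 0) & (y < 0)
mask[left] = grating(y)[left]                                        # straight slits
mask[upper] = grating(y * np.cos(tilt) - x * np.sin(tilt))[upper]    # +tilt slits
mask[lower] = grating(y * np.cos(tilt) + x * np.sin(tilt))[lower]    # -tilt slits

aperture = (pupil & mask).astype(float)
# Far-field (in-focus) intensity; simulating defocus would require adding a
# quadratic phase term to the aperture before the FFT.
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2

plt.imshow(np.log10(psf + 1.0), cmap="gray")
plt.title("Toy Bahtinov diffraction pattern (perfect focus)")
plt.show()

With the straight and tilted slit sets, the plot shows one central streak and two symmetric diagonal streaks crossing it, which is the pattern described above.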

Mask Creation

There are several ways to obtain a mask. The simplest (although perhaps the most expensive) is to order one online. The second is to use the free Bahtinov Focusing Mask Generator provided by astrojargon. However, I could never get it to work properly given the clear-aperture settings of my tiny MILC, the NEX 5R; it seems the mask generator was designed for "actual telescopes". If you scrounge around online, you'll also see that people have built their own custom Bahtinov masks out of spare pieces of plastic, wood, and metal.

There are also mentions online of creating a simpler "Y" mask. I created one, but it seemed a little more finicky and a little harder to set up, so since I don't really understand it and it didn't really work for me, I decided to stick with the standard Bahtinov mask.

However, being an engineer with a penchant for what my friends refer to as "ghetto engineering", I decided to make one out of paper. After talking with my friend Daniel, who is finishing up his Ph.D. in lasers and optics at UCLA, about how much construction precision actually matters for a diffraction mask like this one, the overall conclusion was that as long as the lines are fairly parallel, the mask should create a strong enough signal for the Bahtinov focusing effect to work well. Construction precision overall shouldn't matter much as long as the average orientation of the mask slits is accurate enough. Therefore, I decided to first try a Bahtinov mask made of just paper and paper-based products I had lying around at home. I pretty much eyeballed the mask holder construction, so it's definitely not pretty.

Cutting the Mask

The first step was to generate a mask on astrojargon that looked possible to cut and didn't require too much work. I basically generated a large template that looked good and then shrunk it down to a diameter slightly larger than my 49 mm lenses.

Cutting out the mask.

I taped the mask template to the front page of a spare musician's catalog (I actually like the retailer; I just haven't needed to buy another instrument recently, and why let a perfectly good piece of paper go to waste?). Next, I pulled out a trusty X-Acto knife and started scoring away with a ruler. The mask probably took 30-40 minutes to cut out.

Mounting the Mask

After cutting the mask, I decided to create a mask holder out of a manila folder that was already slightly bent on one side. Using a ruler, I created a flange, which I then cut into segments to form a crown to hold the paper Bahtinov mask.

Creating the flange for the crown.

Separating the flange for the crown

I also cut out a larger cardboard ring to mount the mask onto. The ring also serves as something sturdy to attach the flanges of the crown to.

Reinforcement ring

Using a bit of Elmer’s rubber cement (just in case I needed to make adjustments) I assembled the paper bahtinov mask.

Assembled mask

Top view of assembled mask

Going out on a test run revealed a few things. The first is that I couldn't seem to see the three light streaks; instead I saw three images of the light I was focusing on. Aligning all three images yielded a very in-focus image. I later found out that the mask actually works as prescribed; however, what the LCD viewfinder on the NEX 5R initially displays is not at 1:1 pixel density. The light streaks are actually small, so I need to zoom in to 4.3x or 9.6x to see the three-streak focusing phenomenon.

The other thing I noticed was that I had a hard time rotating the focus ring. On the SEL1855, the focus ring is the front ring, closer to the end of the lens, and the zoom ring is behind it. The crown of the mask holder covers up the focus ring. As a compromise, in order to retain some semblance of material strength, I cut two slots into the crown so I could rotate the focus ring while still allowing the mask to grip the lens fairly well.

SEL1855 Adjusted Paper Bahtinov Mask

Conclusion

Overall, I’d call this work of “ghetto engineering” a success. It gives me confidence in nailing the focus for photos at night and it helps me visually determine how out-of-focus I am. When using it with really dark skies, I’ve noticed that it’s hard to find bright stars to use this on. One trick for focusing with dark skies that I read about online is to walk 40-100ft away and leave a bright LED flashlight. Then walk back and then proceed to focus on the light with the mask on. I’ve found this to work quite well especially because I’m shooting slightly wide (about 24mm equivalent).


Legacy Lenses (glass) and Adobe Lens Profile Creator: Lens Profile Creation Analysis

August 24th, 2014

So I recently came across my father's old stash of analog SLR cameras, and it occurred to me that I could probably reuse those lenses (an Olympus OM 50mm f1.4 and a Helios 44-2 58mm f2.0) on my NEX 5R. After polishing up the lenses, I thought they might be usable and searched Flickr for a similar setup. To my surprise there were quite a few sample images from the NEX 5 + OM 50mm f1.4 and NEX 5 + Helios 44-2 combos. The Flickr samples seemed decent, so I picked up a few Fotasys adapters, which were really cheap (10-15 apiece) and seem to be solidly built.

Probably the only issue I have with them is that neither adapter really "locks" the lens in. Occasionally I can feel the OM 50mm or Helios 44-2 shift a bit, at which point I tighten it back onto the adapter. Generally it doesn't happen, and with the OM 50mm I haven't had any problems with it slipping or falling off.

Here are a few sample images from the OM 50 f1.4.

Mount Diablo Summit Trail (OM 50mm f1.4)

OM 50mm f1.4

So as you can see, the pictures are decently sharp and the bokeh is pretty decent. These lenses retail for $50-$70 on eBay; they aren't expensive by any margin, but they are decent, fast, manual prime lenses. Of course, part of the beauty of these lenses is that they produce their own "style" of images, and there's something fun about using old analog SLR gear on a new digital MILC. The quirky combination works pretty well with focus peaking on the NEX system (although it's not perfect).

As an Adobe Lightroom user, I've also been doing a lot of post-processing on my images. I've realized that it's just part of photography (the digital equivalent of darkroom work) and that it's one thing to take good images and another to enhance them into awesome images. Lightroom has preset lens profiles that allow you to automatically correct distortion, vignetting, and chromatic aberration (the colored fringing and "weird" glare around objects). I installed the Adobe Lens Profile Downloader to see if the OM 50 f1.4 or the Helios were available, and of course they weren't!

Adobe Lens Profile Creator

It turns out Adobe also provides the Adobe Lens Profile Creator (ALPC) program. The program comes with calibration charts (grids), which I'll explain in a moment, as well as some sparse documentation about how to operate it. As it turns out, the program is "simple" to operate, but it does not "simply" work all the time. It uses its own algorithms, and there are some caveats and related trickery involved in getting ALPC to spit out a lens profile. In fact, ALPC confounded me for 2-3 days before I got a working profile (a calibration for one f-stop), and it took an extra 2 days of experimentation to finally get everything working in one go. The rest of this blog post delves into the details of how to create a basic but complete lens profile for manual primes. I'll also give some observations on how I think ALPC works and the most efficient way to shoot the calibration photos, and show some workarounds I've tested for conveniently creating different calibration charts for proper lens calibration at a range of focal lengths. I assume these observations and workarounds will work equally well for zoom and wide-angle lenses, because they're all based on how I believe ALPC is programmed and what it's trying to do mathematically, at a general hand-wavy level.

So to begin I’ll describe the basic procedure summarized from the ALPC documentation. It does a decent job of describing the process, but leaves some details for you to figure out. I will try to clarify as many details as I can with some pictures, so as to hopefully help others avoid the frustration I encountered with calibrating the lenses. I mean – does anyone really enjoy generating lens calibration profiles? Probably not! But if everyone could easily (and yes, everyone can with ALPC) generate a lens profile for their lens, it would collectively save people a lot of time, and also foster further community within photography.

The basic procedure is to shoot a set of 9 images at a given setting: exposure, aperture, focus distance, and focal length. Each of the pictures should be in focus. You then import these photos, and ALPC will go through each set at a particular focus distance and create a lens distortion profile from the information. Since distortion varies with focal length for zoom lenses, you would have to repeat these sets at different focal lengths as well.

ALPC Issues and possible workarounds

We’ve finally reached the portion of the blog post where I bring in the technical specifics, hackery, and process optimization together. We’ll start by looking at some of the issues and frustrations I found with ALPC.

How to shoot proper calibration photo sets and fix chart printing errors

One of my initial frustrations was with the calibration charts. Initially the grids seemed a little arbitrary, so I will explain what the calibration information means. Each calibration chart label describes how many Y x X squares are on the chart. ALPC doesn't look at the outer ring of the grid when calibrating the lens profile; it looks at the points that are within, and not on, the border of the grid. This means that if there are 13 x 19 squares, it will be looking for 11 x 17 = 187 points. Next, the label at the bottom describes how many points across each square is, which determines the size of each square; a point is 1/72 of an inch. Smaller squares generate more points, which helps create a more accurate lens distortion calibration profile.
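As a concrete example of that arithmetic (my own illustration, using the 13 x 19, 36 pt chart mentioned later):

rows, cols = 13, 19      # Y x X squares printed on the chart
square_pt = 36           # side of each square in points (1 pt = 1/72 inch)

interior_points = (rows - 2) * (cols - 2)   # points inside the border, per the description above
square_in = square_pt / 72.0
grid_in = (cols * square_in, rows * square_in)

print(f"Points ALPC looks for: {interior_points}")   # 11 x 17 = 187
print(f"Square size: {square_in:.2f} in; grid: {grid_in[0]:.1f} x {grid_in[1]:.1f} in")
# -> 0.50 in squares, a 9.5 x 6.5 in grid, which fits on US letter paper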

Some of my frustrations were related to the available calibration chart sizes, and also to printing the charts on 8.5x11 US letter paper with a laser printer. To start, if your laser printer occasionally skips over a specific line (prints it lightly) or misses a few specks here and there, ALPC will fail to find some points and mark the image as invalid. After studying which points ALPC missed (it highlights the grid points it recognizes in yellow), I realized it was due to the white specks; the same happened with a lightly printed line. The fix was as simple as finding a black ballpoint pen and filling in the lightly printed or white-speck portions.

A separate issue is calibration chart size, which in turn relates to ALPC's ability to detect geometric distortion. I'll cover geometric distortion first, and then the chart-size issues, which come up at different focus distances.

After correctly "fixing" my slightly mis-printed 8.5x11 calibration charts, ALPC would find all of the points. However, it would then give an ambiguous error saying that there wasn't enough variation in the images. Searching online didn't turn up a good answer, but as it turns out, the error is correct. In order to generate a geometric lens correction profile and a proper vignetting profile, you ideally want to capture the maximum distortion your lens will give you. The obvious way to do this is to compare the edges of the frame to the center, where distortion is least; this gives you 9 obvious regions for imaging: the center, the edges, and the corners. To capture the maximum distortion, the solution is to make sure the calibration chart (the grid portion, not including the labeling) is smaller than 1/9th of the picture frame. This lets ALPC extrapolate roughly how much distortion occurs by looking at how the grid points warp when the grid is not shot in the center. The documentation mentions this only in passing, but I'd rephrase it as a hard requirement: the calibration grid NEEDS to be AT MOST 1/9th of the picture frame. The grid doesn't have to be EXACTLY 1/9th, as the documentation seemed to imply; it just must be no bigger.

If you think about it, a smaller calibration grid will still test the distortion of the lens effectively; there are just fewer grid points to average the geometric and vignetting distortion parameters over, so the accuracy suffers a bit. You will capture the same maximum distortion at the same settings whether your grid has 5x9 points or 13x19, assuming the grid is close to the edge of the frame where distortion is obviously greatest. The number of points only increases your accuracy; it doesn't affect the correctness of your distortion measurement!
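To make the 1/9th rule concrete, 1/9th of the frame area means roughly 1/3 of each linear dimension; here is a rough pinhole-projection check (my own sketch, with an assumed sensor size and chart distance for illustration):

f_mm = 50.0                          # lens focal length
sensor_mm = (23.5, 15.6)             # APS-C sensor, width x height
grid_mm = (9.5 * 25.4, 6.5 * 25.4)   # printed 13 x 19, 36 pt grid, width x height
distance_mm = 2000.0                 # assumed camera-to-chart distance

# Pinhole approximation: size on sensor ~= focal length * object size / distance
image_mm = [f_mm * g / distance_mm for g in grid_mm]
area_fraction = (image_mm[0] / sensor_mm[0]) * (image_mm[1] / sensor_mm[1])
print(f"Grid on sensor: {image_mm[0]:.1f} x {image_mm[1]:.1f} mm, "
      f"area fraction {area_fraction:.2f} (needs to be <= 1/9, or about 0.11)")
# -> about 6.0 x 4.2 mm, area fraction ~0.07, so this distance would be fine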

To be honest, I can't really think of a good reason why grids that cover larger areas (over 1/9th) don't work. Perhaps the algorithm implementation relies on the sampled regions having little to no overlap. In that case, it is a weakness of the implementation rather than an inherent limitation of lens profiling algorithms.

Easy, quick, and dirty custom-sized calibration charts

In any case, both "fixing" the calibration chart printing errors and keeping the grid to at most 1/9th of the frame area allowed me to generate accurate profiles without any ALPC errors. However, the usable range of focus distances, even for 50 and 58 mm primes, is limited if you only have one 13 x 19, 36 pt chart that fits on an 8.5x11 in piece of paper. Since I didn't want to generate larger or smaller charts, I printed a second copy of the 13 x 19 chart and then set the camera up at the lens's minimum focus distance. I took a picture and then determined the dimensions of a grid that would have black squares at the corners (you'll notice this in all the calibration charts) and would fit within 1/9th the area of the camera frame.

From my explanation of the calibration charts above, you'll realize that the only parameter that changes is the Y x X grid dimensions; the size of the square (the point (pt.) size) remains the same. After noting this, the first thing I tried was to use blue painter's tape to cover up the parts of the grid I didn't want ALPC to recognize. I cut the grid such that the painter's tape width would easily cover the unwanted squares. To my surprise, ALPC did not recognize any points in any of the images. My reasoning is that ALPC's grid-locating algorithm actually relies on there being a white border around the grid. After locating a Y x X arrangement of squares, it then goes in and looks for continuous black and white segments to identify grid points. This also explains the issues with printing errors and white specks: they confuse the part of the algorithm that looks for grid points, while the border is what determines where the grid is!

With this hypothesis, I proceeded to use a second copy of the same chart. This time I cut the grid in a specific way that allows it to be taped to the wall with painter's tape, without having to create custom calibration charts (which you always have the option of doing). The following is a simple illustration of how one can cut a smaller calibration chart out of a larger one such that ALPC will still recognize it properly.

Quick ALPC custom chart hack

As you can see, the resulting chart is a 5 x 9, 36 pt chart created from a 13 x 19, 36 pt chart. The corners of the grid are still black squares, and ALPC is not bothered by the extra white squares attached to the grid.

(Note: I did not test what happens if the background isn't white. I used a white wall for my calibration pictures, so a non-white background could obviously be a problem. The solution, however, is trivially simple: just mount the cut-down grid on an 8.5x11 white piece of paper.)

ALPC calibration set labeling and EXIF fun

After this beautiful, elegant, and simple "hack", ALPC was able to figure out the lens calibration parameters for the images. I was able to generate lens profiles and append different focus distances to the same file.

The last issue relates to ALPC's automatic naming conventions for lens profiles. Obviously, with my fully manual legacy film primes from the late 70s, my Sony NEX 5R doesn't know what to put for Lens Make and Model in the EXIF information. Luckily, exiftool is pretty well written, and I've created a simple Python script that uses it to add the EXIF data to all the JPG or DNG files in a specific directory. Since the Sony NEX 5R outputs Raw (.ARW) and JPEG, I had to use Adobe DNG Converter to convert all the raw files to DNG before applying the exiftool modifications. The script lets you simply add the focal length, the focus distance (subject distance), and the aperture value, along with a string for the lens name. This is pretty much all that ALPC reads from the EXIF of the pictures. Without this information, your profiles will have "undefined" for each of the focus distance groups. Importing pictures whose EXIF has been modified with the script results in properly populated focus distance group labels. However, the Lens Make may not be properly populated.

The reason for this is that ALPC and Lightroom apparently only support a limited list of lens makers. While the list is fairly expansive, it doesn't cover everything, and by that I mean Russian-branded Helios lenses. Olympus is supported, so ALPC properly populates the make for my Olympus OM 50mm f1.4; however, the Lens Make for my Helios 58mm f2 was left blank. The resulting lens profile (.lcp) file is actually an XML-formatted file. Looking inside, you will see the lens distortion parameter values and so forth, and you can also view and edit the Lens Make, creator website, and so on. What you will find, however, is that even if you set the Lens Make in ALPC before generating the lens profiles, the Lens Make will default to your Camera Make, which in my case is Sony. Changing the XML afterwards to say Helios didn't show up in Lightroom, so in the end I had to stick with Sony as the Make and Helios as the name of the lens and the lens profile.

Here’s the link to my ALPC_modify_EXIF.py script. It is a companion script for exiftool to modify a folder of 9 images that make up a calibration set. Simply pass the folder path and filetype (.DNG or .JPG) for the corresponding filetype for your calibration set, and the script will prompt you for Focal Length, Aperture, Subject Distance (or Focus Distance) and the Lens Model Name.

Example:

python ALPC_modify_EXIF.py [calibration set location] [dng or JPG]
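For reference, here is a minimal sketch of what such a companion script might look like; this is not the actual ALPC_modify_EXIF.py, just an illustration that assumes exiftool is installed and on the PATH:

import subprocess
import sys
from pathlib import Path

def main():
    folder, ext = sys.argv[1], sys.argv[2].lstrip(".").lower()   # e.g. "dng" or "jpg"
    focal = input("Focal length (mm): ")
    aperture = input("Aperture (f-number): ")
    distance = input("Subject (focus) distance (m): ")
    lens = input("Lens model name: ")

    files = [p for p in sorted(Path(folder).iterdir()) if p.suffix.lower() == f".{ext}"]
    for f in files:
        # exiftool writes the tags that ALPC reads when grouping calibration sets
        subprocess.run([
            "exiftool", "-overwrite_original",
            f"-FocalLength={focal}",
            f"-FNumber={aperture}",
            f"-ApertureValue={aperture}",
            f"-SubjectDistance={distance}",
            f"-LensModel={lens}",
            str(f),
        ], check=True)

if __name__ == "__main__":
    main()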

Conclusion

This concludes my somewhat exhaustive study of ALPC and lens profile generation. Hopefully it has been helpful to other legacy-lens (and perhaps telescope-plus-camera) photographers. One observation I left out is that ALPC seems to suffer from I/O bottlenecks, especially when reading large numbers of DNG files while calibrating across several apertures and focus distances. Perhaps in the future I will write a GPU/OpenCL-accelerated generator for geometric, vignetting, and chromatic aberration lens profiles: one that is less picky about picture sizes, works faster, and is more resilient even with a lower number of pictures. However, overall I am satisfied with the lens profile correction results, and I can make do with ALPC as it exists. I have been happily shooting with my legacy lenses knowing that I can fix the distortion properly in Lightroom with the click of a button.

ASrock Z77 Extreme4 – Hackintosh (10.8.2) log

December 19th, 2012

It turns out making this machine into a Hackintosh is not as simple as it seems, especially for a first-time hackintosher. I initially tried to include a GTX 460 in the machine while installing, and while that did work, I had issues getting Intel HD 4000 graphics working properly (surprisingly, I was able to get the Nvidia GTX 460 output working). I also tried several configurations of ACPI hacking and, after installing OS X about 40 times, decided I'd just go back to using EasyBeast.

Computer Specs:
ASrock Z77 Extreme 4
i5-3570K
8 GB RAM
ASrock Z77 Extreme 4, BIOS version 2.20

The system was brand new, and I want to triple boot in the future. I don't have a Windows license key and I don't know which version of Windows I'd like, so I will skip that for the moment.

Steps:
1. Create a Unibeast bootable USB drive. Remember to completely erase the disk in Disk Utility before creating it; overwriting the same partition does not seem to actually erase all the files in the directory. Also, later on, do not leave any DSDT.aml files in the root location (zip files seem to be fine, though).
2. Download Multibeast and grab the Broadcom kext (driver) patch from http://www.osx86.net/view/3266-asrock_z77_extreme4_dsdt_bios_p2.00.html. The files include the networking kext and a DSDT file, but because I'm using EasyBeast we can ignore the DSDT file. Also download KextBeast onto the USB. Leave everything as zip files.
3. Upgrade to the 2.20 BIOS if you haven't already done so. (This is probably not necessary; initially I had a partially working install where I used a DSDT file that was for the 2.20 BIOS.)
4. Change the BIOS settings. Set the SATA mode to AHCI and change the shared memory (DVMT) in the North Bridge configuration to 64 MB (yes, this is somehow okay). Set graphics to onboard.
5. Insert the Unibeast USB and start the install. Partition the disk into 3 segments using a GUID partition table in Disk Utility, and then select the first partition.
6. After restarting, select "My computer does not connect to the Internet."
7. Run Multibeast, select EasyBeast, and then click OK.
8. Unzip the Broadcom zip file onto the Desktop and run KextBeast. Restart.
9. The system should now be able to restart. (I once used the HD4000PlateformId=9 boot flag, but it shouldn't be necessary if you set the shared memory to 64 MB; see http://blog.stuffedcow.net/2012/07/intel-hd4000-qeci-acceleration/.) After booting, open Multibeast and select Drivers > Audio > Without DSDT > ALC8xxx. Restart.
10. Now the audio (I only got the rear port working!), networking, and video should work. You may need to update iTunes and Safari.

Overall, I spent several days working on this and ended up with a less-than-optimal solution. However, it should be sufficient for my needs, and audio, networking, and video seem to work. Putting the GTX 460 graphics card back in doesn't seem to affect the install, and I don't think it is being used by Mac OS X 10.8.2. Now it's time to finally install Ubuntu and test CUDA.