Archive for the ‘Education’ Category

Screen door Bahtinov Mask

August 24th, 2014

After making and using the Paper Bahtinov Mask for a while, I decided to try to make a better, cheaper, and perhaps more robust mask. The original Paper Bahtinov Mask still works; however, it's a little too delicate for my tastes, and I would rather have a more resilient, foldable solution for the times when I go on hiking or camping trips in the wilderness, where compactness is important.

When mulling it over, there were several key considerations that led to the idea of using screen material as the base for a Bahtinov Mask. The first is that screen material is fairly robust and cloth-like, so it's relatively easy to store and decently strong. The second is that the unique features of the traditional Bahtinov mask rely on the existence of a particular set of parallel slits in each direction. It's the obstruction of light in a particular orientation which creates the strong signal (the "light streak") that can be seen when zoomed in on the viewfinder.

By the same line of reasoning, using a screen as a base in theory shouldn't have much effect on the diffraction pattern, other than reducing incoming light by a tad. The reasoning is that the screen pattern obstructs light "equally" in the horizontal and vertical directions; on average, if the obstruction is small, the directionality of the bands is somewhat mitigated. If we align the portion of the Bahtinov mask that corresponds to the "large streak" (the largest portion, whose slits run in a vertical or horizontal direction) with the screen grid, we can ensure that that signal doesn't get attenuated. The perpendicular orientation of the screen will then attenuate the symmetric portions of the mask equally; the overall effect is that a stronger middle streak should occur, but the overall characteristics of the Bahtinov mask are retained.

With that in mind, I decided to use a sheet of extra screen I had lying around and use painter's tape as the masking material to create the pattern. I did not generate a mask pattern this time. From the post on the Paper Bahtinov Mask, the important insight about the Bahtinov Mask is that the symmetric smaller portions remain parallel and symmetric to create the banding patterns that yield the diagonal light streaks. Therefore, I chose a "slope" for each of the slanted portions that was easy to eyeball and looked aesthetically pleasing, which ended up being a 3:1 vertical-to-horizontal ratio.

(Note: I understand, of course, that the angle of the smaller symmetric portions affects the angle of the diagonal lines, but as long as it's visually appealing – I'm not using a computer to aid my focusing – I think it will be more than sufficient. The angle doesn't affect the construction anyway.)
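For the curious, the slit angle implied by a 3:1 slope is easy to compute. A quick sketch; nothing here is specific to my mask other than the 3:1 ratio:

```python
import math

# Slope of the diagonal slits: 3 units vertical per 1 unit horizontal.
rise, run = 3.0, 1.0

# Angle of the diagonal slits, measured from the direction of the main slits.
angle_deg = math.degrees(math.atan(run / rise))

print(f"3:1 slope -> {angle_deg:.1f} degrees")
```

That works out to roughly 18.4 degrees, close to the commonly quoted angle for traditional Bahtinov masks.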


The construction is relatively simple. First, trace a circle with a Sharpie or some other marking utensil that covers the appropriate mask area. Next, decide how thin you want each of the slots and bands of the mask. In this trial, I chose 2mm, because I can cut fairly straight slivers of painter's tape at that accuracy, and because the more slots I have, the better the signal I can hopefully get. The screen material was also spaced at roughly 2mm.

Start of construction

After marking the area I wanted to cover and using a slightly thicker piece of tape (4mm) to divide the two halves, I placed a third vertical 2mm band over the middle to section off the smaller symmetrical diagonal portions. I then proceeded to mark and cut 2mm-wide bands of painter's tape and applied them to the screen material. Diagonal bands were placed at a 3:1 vertical-to-horizontal ratio, and the large bottom portion contained vertical bands. The diagonal bands were more or less "eyeballed" at an even spacing and checked for symmetry. The vertical bands were placed 2mm (one screen grid spacing) apart. The mask took about 20-30 minutes to cut and piece together.

Cutting the bands made of painter's tape

Mask used for the demonstration

Demonstration Test

The Screen door Bahtinov Mask is attached to the camera by centering the mask and then folding the remaining material around the lens. A rubber band or temporary zip-tie holds the screen in place while one focuses manually. After focus is achieved, the zip-tie or rubber band is removed.

The following pictures show the mask in action. In this test I unfortunately used a headlamp that has two LEDs, so if you zoom in you will see not one but two sets of light streaks. The pictures were taken from a chair with a 2-second delay.

original image

focused image

And the in-focus image.

focused image

After usage, I noticed that some of the bands of painter's tape weren't adhered to the screen very strongly, so I added some thicker bands of painter's tape on the periphery as well as on the main horizontal and center vertical bands.


As you can see here, the Screen door Bahtinov Mask seems to work fairly well. I'd venture to say it works as well as my Paper Bahtinov Mask, which I used for a few nice panoramas at Sequoia National Park. Here's one of my 500px photos where I used the Paper Bahtinov Mask for focusing.

There are definitely a few improvements I could make. Placing painter's tape on a screen naturally evokes thoughts of screen printing. I don't have a lot of experience in this area, and you would have to go through the screen printing masking and etching process for every few Bahtinov masks you want to make. A similar idea that I am proposing is to 3D print a relatively large yet fine Bahtinov Mask at a thin thickness (say 4-5mm). Place a screen over the thin 3D printed mask. Next, use a spray-on plastic like PlastiDip and spray the mask several times. Replace the sacrificial spray base material until a thin layer of plastic forms atop the screen, which is atop the Bahtinov mask. Lastly, peel the screen from the mask, and hopefully one can achieve a beautifully opaque Bahtinov mask. If one prints a large 3D mask that covers up to 77-80mm, that should pretty much cover most camera lenses. Making masks for smaller lenses is simple, as the large mask can be reused: one simply covers part of the screen such that a smaller portion of the mask and screen is exposed to PlastiDip. If anyone tries this out, please let me know if it works.

Legacy Lenses (glass) and the Adobe Lens Profile Creator: Lens Profile Creation Analysis

August 24th, 2014

So I recently came across my father's old stash of analog SLR cameras, and it occurred to me that I could probably reuse those lenses (Olympus OM 50mm f1.4 and Helios 44-2 58mm f2.0) on my NEX 5R. After polishing up the lenses, I thought they might be usable and searched on Flickr for a similar setup. To my surprise, there were quite a few sample images from the NEX 5 + OM 50mm f1.4 and NEX 5 + Helios 44-2 combos. The Flickr samples seemed decent, so I picked up a few Fotasy adapters, which were really cheap (10-15 apiece), and they seem to be solidly built.

Probably the only issue I have with them is that neither of the adapters really "locks" the lens in. Occasionally I can feel the OM 50mm or Helios 44-2 shift a bit, at which point I tighten it back onto the adapter. Generally it doesn't happen, and with the OM 50mm I didn't have any problems with it slipping or falling off.

Here are a few sample images from the OM 50 f1.4.

Mount Diablo Summit Trail (OM 50mm f1.4)

OM 50mm f1.4

So as you can see, the pictures are decently sharp, and the bokeh is pretty decent. These lenses retail for $50-$70 on eBay. They aren't expensive by any measure, but they are decent, fast manual prime lenses. Of course, part of the beauty of these lenses is that they produce their own "style" of images, and there's something fun about using old analog SLR gear on a new digital MILC. The quirky combination works pretty well with the focus peaking on the NEX system (although it's not perfect).

As an Adobe Lightroom user, I've also been doing a lot of post-processing on my images. I realized that it's just part of photography (the digital equivalent of darkroom work), and that it's one thing to take good images and another to enhance them into awesome images. Lightroom has preset lens profiles that allow you to automatically correct distortion, vignetting, and chromatic aberration (silhouettes and "weird" glare around objects). I installed the Adobe Lens Profile Downloader to see if the OM 50mm f1.4 or the Helios were available, and of course they weren't!

Adobe Lens Profile Creator

It turns out Adobe has also provided the Adobe Lens Profile Creator (ALPC) program. This program comes with calibration charts (grids) that I'll explain in a moment, as well as some sparse documentation about how to operate the simple ALPC program. As it turns out, the program is "simple" to operate, but does not "simply" work all the time. It uses its own algorithms, and there are some caveats and related trickery involved in getting ALPC to spit out a lens profile. In fact, ALPC confounded me for 2-3 days before I got a working profile (a calibration for one f-stop), and then it took an extra 2 days of experimentation to finally get everything working in one go. The rest of this blog post will delve into the details of how one creates a basic and complete lens profile for manual primes. I'll also give some observations as to how I think ALPC works, describe the most efficient way to shoot the calibration photos, and show some workarounds I've tested for conveniently creating different calibration charts for proper lens calibration at a range of focal lengths. I assume these observations and workarounds will work equally well on zoom and wide-angle lenses, because it's all related to how I believe ALPC is programmed and what it's trying to do mathematically, at a general hand-wavy level.

So to begin I’ll describe the basic procedure summarized from the ALPC documentation. It does a decent job of describing the process, but leaves some details for you to figure out. I will try to clarify as many details as I can with some pictures, so as to hopefully help others avoid the frustration I encountered with calibrating the lenses. I mean – does anyone really enjoy generating lens calibration profiles? Probably not! But if everyone could easily (and yes, everyone can with ALPC) generate a lens profile for their lens, it would collectively save people a lot of time, and also foster further community within photography.

The basic procedure is to shoot a set of 9 images at a given setting: exposure, aperture, focus distance, and focal length. Each of the pictures should be in focus. You can import these photos, and ALPC will go through each set at a particular focus distance and create a lens distortion profile based on the information. Since distortion varies with focal length for zoom lenses, you would also have to redo these sets at different focal lengths.
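In practice, the nine shots correspond to placing the chart at the center and around the edges and corners of the frame. A trivial sketch enumerating those placements as a shooting checklist (the position labels are my own, not ALPC terminology):

```python
# Enumerate the 3x3 grid of chart placements for one calibration set.
rows = ["top", "center", "bottom"]
cols = ["left", "center", "right"]

positions = [f"{r}-{c}" for r in rows for c in cols]

for i, pos in enumerate(positions, start=1):
    print(f"shot {i}: place chart at {pos} of frame")
```

Nine positions, one in-focus shot each, at the same exposure, aperture, focus distance, and focal length.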

ALPC Issues and possible workarounds

We’ve finally reached the portion of the blog post where I bring in the technical specifics, hackery, and process optimization together. We’ll start by looking at some of the issues and frustrations I found with ALPC.

How to shoot proper calibration photo sets and fix chart printing errors

One of my initial frustrations was with the calibration charts. Initially the grids seemed a little arbitrary, so I will explain what the calibration information means. Each calibration chart label describes how many Y x X squares are on the chart. ALPC doesn't look at the outer squares of the grid when calibrating the lens profile information; it looks at the points that are within, and not on, the border of the grid. This means that if there are 13 x 19 squares, it will be looking for 11 x 17 = 187 points. Next, the label at the bottom describes how many points (pt) wide each square is, which determines the size of each square; a point is 1/72nd of an inch. Smaller squares yield more points, which helps create a more accurate lens distortion calibration profile.
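A quick sketch of the chart arithmetic above, assuming 1 pt = 1/72 in:

```python
# Interior grid points for a chart with `rows` x `cols` squares:
# ALPC only uses the points strictly inside the outer border.
def interior_points(rows, cols):
    return (rows - 2) * (cols - 2)

# Physical side length of one square, given its size in points (1 pt = 1/72 in).
def square_size_mm(points):
    return points / 72.0 * 25.4

print(interior_points(13, 19))       # 11 x 17 = 187 points
print(round(square_size_mm(36), 2))  # a 36 pt square is 12.7 mm on a side
```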

Some of my frustrations were related to the available calibration chart sizes, and also to printing the calibration charts on 8.5×11 US letter-sized paper using a laser printer. To start, if your laser printer occasionally skips over a specific line (prints it lightly) or misses a few specks here and there, ALPC will fail to find some points and mark the image as invalid. After studying which points ALPC missed (it highlights the grid points it recognizes in yellow), I realized it was due to the missing white specks. The same occurred for a lightly printed line. The solution was as simple as finding a black ballpoint pen and filling in the lightly printed or white-speck portions.

A separate issue is related to calibration chart size, which in turn is related to ALPC's ability to detect geometric distortion. More on geometric distortion first, and then on to the calibration chart size issues, which are related to different focus distances.

After correctly "fixing" my slightly mis-printed 8.5×11 calibration charts, ALPC would find all of the points. However, it would give an ambiguous error saying that there wasn't enough variation in the images. After searching online, I couldn't really find a good answer. As it turns out, the error is correct: in order to generate a geometric lens correction profile and a proper vignetting profile, you ideally want to test the maximum distortion your lens will give you. The obvious way to do this is to compare the edges of your frame to the center, where distortion is the least. This gives you 9 obvious quadrants for imaging: the center and the edges. In order to capture the maximum distortion, the solution is to make sure the calibration chart (the grid portion, not including the labeling) is no larger than 1/9th of the photographic picture frame. This allows ALPC to extrapolate roughly how much distortion occurs by looking at how the grid points warp when you are not shooting the grid in the center. The documentation specifies this as a passing point, but I'd like to rephrase their suggestion as a necessary one: the calibration grid NEEDS to be AT MOST 1/9th of the picture frame. The grid doesn't have to be EXACTLY 1/9th, as the documentation seemed to imply; it needs to be 1/9th at the largest, but should be no bigger!

If you think about it, a smaller calibration grid will still test the distortion of the lens effectively; there are simply fewer grid points to average the geometric and vignetting distortion parameters over, in terms of accuracy. You will capture the same maximum distortion at the same settings whether your grid has only 5×9 points or 13×19, assuming the grid is close to the edge of the frame, where distortion is obviously the greatest. The number of points just increases your accuracy; it doesn't necessarily affect the correctness of your distortion!
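If you want to size a chart against the 1/9th rule before printing, you can estimate the field of view at the subject plane with the simple pinhole approximation (field width ≈ sensor width × distance / focal length). A quick sketch; the APS-C sensor dimensions and the 50mm / 1 m numbers are example assumptions of mine, not anything prescribed by ALPC:

```python
# Rough field width at the subject plane via the pinhole approximation.
def field_width_mm(sensor_mm, focal_mm, distance_mm):
    return sensor_mm * distance_mm / focal_mm

# 1/9th of the frame AREA means at most 1/3 of each linear dimension.
def max_chart_dim_mm(sensor_mm, focal_mm, distance_mm):
    return field_width_mm(sensor_mm, focal_mm, distance_mm) / 3.0

# Assumed example: APS-C sensor (23.5 x 15.6 mm), 50 mm lens, 1 m focus distance.
w = max_chart_dim_mm(23.5, 50.0, 1000.0)
h = max_chart_dim_mm(15.6, 50.0, 1000.0)
print(f"grid must fit within about {w:.0f} x {h:.0f} mm")
```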

To be honest, I can't really think of a good reason why grids that cover larger areas (over 1/9th) don't work. Perhaps the algorithm implementation they are using relies on having little to no overlap. In that case, it is a weakness of the algorithm implementation rather than an issue with lens profile algorithms in general.

Easy, quick, and dirty custom-sized calibration charts

In any case, both "fixing" the calibration chart printing errors and making sure the grid is at most 1/9th of the frame area allowed me to generate accurate profiles without any errors from ALPC. However, the range of focus distances, even in the case of a 50mm or 58mm prime, is limited given a single 13×19 54-point chart that fits on an 8.5×11 in piece of paper. Since I didn't want to generate larger or smaller charts, I printed a second copy of the 13×19 chart and then set the camera up at its minimum focus distance. I took a picture and then determined the dimensions of a grid that would have black squares at the corners (you'll notice this in all the calibration charts) and would fit within 1/9th of the area of the camera frame.

From my earlier explanation of the calibration charts, you'll realize that the only parameter that changes is the Y x X grid dimensions; the size of the square (the point (pt.) size) remains the same. After noting this, the first thing I tried was to use blue painter's tape to cover up the parts of the grid I didn't want ALPC to recognize. I cut the grid such that the painter's tape width would easily cover the unwanted squares. To my surprise, ALPC did not recognize any points in any of the images. My reasoning is that ALPC's grid-locating algorithm actually relies on having a white border around your grid. After locating a Y x X number of squares, it then goes in and looks for continuous black and white segments to identify grid points. This explains the issues with printing errors and white specks: they confuse the algorithmic part which looks for grid points. The border, however, is what determines where the grid is!

With this hypothesis, I proceeded to use a second copy of the same chart. This time I cut the grid in a specific way, so as to allow it to be taped to the wall with painter's tape without necessitating the creation of custom calibration charts (which you always have the option of doing). The following is a simple illustration of how one can cut a calibration chart out of a larger calibration chart such that ALPC will recognize it properly.

Quick ALPC custom chart hack

As you can see, the resulting chart is a 5 x 9 36 pt chart created from a 13 x 19 36 pt chart. The corners of the grid are still black squares, and ALPC is not bothered by the extra white squares attached to the grid.

(Note: I did not test what would happen if the background isn't white. I used a white wall for my calibration pictures, so a non-white background could obviously be a problem. The solution, however, is trivially simple: just mount the grid on an 8.5×11 white piece of paper.)

ALPC calibration set labeling and EXIF fun

After this beautiful, elegant, and simple "hack", ALPC was able to figure out the lens calibration parameters from the images. I was able to generate lens profiles and append different focus distances to the same file.

The last issue is related to ALPC's automatic lens profile naming conventions. Obviously, with my fully manual legacy film primes from the late 70s, my Sony NEX 5R doesn't know what to put for Lens Make and Model in the EXIF information. Luckily, exiftool is pretty well written. I've created a simple Python script that applies the EXIF modifications to all the JPG or DNG files in a specific directory. Since the Sony NEX 5R outputs Raw (.ARW) and JPEG, I had to use the Adobe DNG Converter to convert all the raw files to DNG and then apply the exiftool modifications. The EXIF script I wrote allows you to simply add in the Focal Length, the focus distance (subject distance), and the aperture value, along with a string for the lens name. This is pretty much all that ALPC will read from the EXIF of the pictures. Without this information, your profiles will have "undefined" for each of the focus distance groups. Importing pictures with the EXIF modified for each picture using the above script will result in properly populated focus distance group labels. However, the Lens Make may not be properly populated.

The reason for this is that ALPC and Lightroom apparently only support a limited list of Lens Makers. While the list is fairly expansive, it doesn't cover everything; by this I mean Russian-branded Helios lenses. Olympus was supported, so ALPC properly populates my Olympus OM 50mm f1.4. However, the Lens Make for my Helios 58mm f2 was left blank. When creating the profile for the lens, the resulting lens profile (.lcp) file is actually an XML-formatted file. Looking in the file, you will see the lens distortion parameter values and so forth. You can also view and edit the Lens Make, the creator website, and so forth. What you will see, however, is that even if you modify the Lens Make in ALPC before generating the lens profiles, the Lens Make will default to your Camera Make, which in my case is Sony. Changing the XML to say Helios didn't show up in Lightroom, so in the end I had to stick with Sony as the Make, and Helios as the name of the lens and the lens profile.
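Since the .lcp file is just XML, you can also try patching the Make field with a few lines of scripting. A hypothetical sketch; the tag name `stCamera:Make` below is a placeholder assumption, so inspect your own .lcp file for the real element name before using it:

```python
# Hypothetical sketch: swap the lens Make inside an .lcp (XML) file as plain text.
# NOTE: the tag name "stCamera:Make" is an assumption -- open your own .lcp
# file and substitute the actual element name it uses.
def patch_lcp_make(xml_text, old_make, new_make, tag="stCamera:Make"):
    needle = f"<{tag}>{old_make}</{tag}>"
    replacement = f"<{tag}>{new_make}</{tag}>"
    return xml_text.replace(needle, replacement)

sample = "<x><stCamera:Make>SONY</stCamera:Make></x>"
print(patch_lcp_make(sample, "SONY", "Helios"))
```

As noted above, though, an edited Make may simply not show up in Lightroom, so this is of limited use in my case.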

Here’s the link to my script. It is a companion script for exiftool to modify a folder of 9 images that make up a calibration set. Simply pass the folder path and filetype (.DNG or .JPG) for the corresponding filetype for your calibration set, and the script will prompt you for Focal Length, Aperture, Subject Distance (or Focus Distance) and the Lens Model Name.


python [calibration set location] [dng or JPG]
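For a rough idea of what such a companion script does under the hood, here is a simplified sketch that just builds the exiftool command. The tags -FocalLength, -FNumber, -SubjectDistance, and -LensModel are standard exiftool tags; the structure here is illustrative and not the exact script:

```python
import subprocess  # used to invoke exiftool when actually applying the tags

def build_exiftool_cmd(folder, ext, focal, fnum, distance, lens):
    """Build the exiftool argument list for one calibration set folder."""
    return [
        "exiftool",
        f"-FocalLength={focal}",
        f"-FNumber={fnum}",
        f"-SubjectDistance={distance}",
        f"-LensModel={lens}",
        "-ext", ext.lstrip("."),  # restrict to .DNG or .JPG files
        folder,
    ]

# Example: tag a folder of DNGs for the OM 50mm f/1.4 shot wide open at 1 m.
cmd = build_exiftool_cmd("calib_set", "DNG", "50", "1.4", "1.0", "OM 50mm f1.4")
print(" ".join(cmd))
# To actually write the tags: subprocess.run(cmd)
```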


This concludes my somewhat exhaustive study of ALPC and lens profile generation. Hopefully this has been helpful to other legacy-lens (and perhaps telescope + camera) photographers. One observation I left out is that ALPC seems to suffer from I/O bottlenecks, especially when reading a large number of DNG files while performing calibration with several apertures and focus distances. Perhaps in the future I will write a GPU/OpenCL-accelerated version of the geometric, vignette, and chromatic aberration lens profiler. I may create a lens profile generator that is less picky about the size of the pictures, works faster, and is more resilient even given a lower number of pictures. However, overall I am satisfied with the lens profile correction results, and I can at least make do with ALPC as it exists. I have been happily shooting with my legacy lenses knowing that I can fix the distortion properly in Lightroom with the click of a button.

ASRock Z77 Extreme4 – Hackintosh (10.8.2) log

December 19th, 2012

It turns out that making this machine into a hackintosh is not as simple as it seems, especially for a first-time hackintosher. I initially tried to include a GTX 460 in the machine while installing, and while that did work, I had issues getting the Intel HD4000 graphics working properly (however, I was surprisingly able to get the Nvidia GTX 460 output working). I also tried several configurations of ACPI hacking and decided, after installing OS X about 40 times, that I'd just go back to using EasyBeast.

Computer Specs:
ASRock Z77 Extreme4
Intel Core i5-3570K
BIOS version 2.20

The system was brand new, and I want to triple boot in the future. I don't have a Windows license key and I don't know which version of Windows I would like, so I will skip that for the moment.

1. Create a UniBeast bootable USB drive. Remember to completely erase the disk in Disk Utility and then create the drive; overwriting the same partition does not seem to actually erase all the files in the directory. Also, later on, do not leave any DSDT.aml files in the root location (zip files seem to be fine, though).
2. Download MultiBeast and grab the Broadcom kext (driver) patch. The files include the networking kext and a DSDT file, but because I'm using EasyBeast we can ignore the DSDT file. Also download KextBeast onto the USB. Leave everything as zip files.
3. Upgrade to the 2.20 BIOS if you haven't already done so. (This is probably not necessary; initially I had a partially working install where I used a DSDT file that was for the 2.20 BIOS.)
4. Change the BIOS settings. Set the SATA mode to AHCI and change the shared memory in the North Bridge configuration to 64 MB (DVMT); yes, this is okay somehow. Set Graphics to Onboard.
5. Insert the UniBeast USB and start the install. Partition the disk into 3 segments using a GUID partition table in Disk Utility, and then select the first partition.
6. After restarting, select "My computer does not connect to the Internet."
7. Run MultiBeast, select EasyBeast, and click okay.
8. Unzip the Broadcom zip file onto the Desktop and run KextBeast. Restart.
9. The system should be able to restart. (I once used the HD4000PlatformId=9 boot flag, but it shouldn't be necessary if you set the shared memory to 64 MB.) After booting, open up MultiBeast and select Drivers > Audio > Without DSDT > ALC8xxx. Restart.
10. Now the audio (I only got the rear port working!), networking, and video should work. You may need to update iTunes and Safari.

Overall, I spent several days working on this and ended up with this less-than-optimal solution. However, it should be sufficient for my needs, and audio, networking, and video seem to work. Putting the GTX 460 graphics card back in doesn't seem to affect the install, and I don't think it is being used by Mac OS X 10.8.2. Now it's time to finally install Ubuntu and test CUDA.

The Julia Language and C/Fortran interface

October 6th, 2012

I developed many finite elements during my Ph.D. research. Many finite element programs are built upon a mix of C/C++/Fortran. I felt more comfortable with C/C++ than Fortran, and I have become fairly familiar with the C/Fortran calling conventions for structs/common blocks, etc.

Now that I’ve graduated, I no longer have access to Matlab for rapid prototyping and rapid checking of solutions to differential equations that I may be interested in. I recently ran across Julia which is a newer language. I decided to give it a try as I’ve done with R, Octave, and a slew of other “Matlab” clones. Julia has some nice features such as parallelism that I have not touched. It’s definitely not a Matlab clone, but it shares many similarities, and has some degree of built-in parallelism.

Recently, I’ve been involved in another material modeling project, and I’ve decided to use Julia during this project as a “rapid prototyping” replacement for Matlab. I found a gnuplot interface that seems to work pretty well called Gaston that seems to work pretty well. However, ultimately my code must run in Fortran for this project where it is compiled into the finite element code as a user material routine. Therefore, I decided to try to use Julia’s C/Fortran calling interface.

To do this in Julia, you must compile your subroutines and functions into a shared library. To illustrate this, we will look at the following Fortran code, which will help us investigate the underlying memory layout of arrays, matrices, and 3rd-order tensors in Julia.


c matrix test file
      subroutine vec(n, vector)
      integer n
      real*8 vector(*)
      write(*,*) (vector(i), i=1,n)
      end

      subroutine mat(m, n, matrix)
      integer m, n
      real*8 matrix(m,n)
      write(*,*) ((matrix(i,j), j=1,n), i=1,m)
      end

      subroutine tten(m, n, o, matrix)
      integer m, n, o
      real*8 matrix(m,n,o)
      write(*,*) (((matrix(i,j,k), j=1,n), i=1,m), k=1,o)
      end

We can then compile the Fortran code into a shared object as follows:

gfortran matrix.f -o matrix.o -shared -fPIC

Then in Julia we can paste in the following code:

# Load the shared library and look up the Fortran symbols
# (gfortran appends a trailing underscore to subroutine names).
matrix = dlopen("matrix.o")
vec_  = dlsym(matrix, :vec_)
mat_  = dlsym(matrix, :mat_)
tten_ = dlsym(matrix, :tten_)

a = linspace(1., 4., 4)
a = a'                      # transpose to a 1x4 row so that b * a is 4x4

b = ones(Float64, 4, 1)
c = b * a

d = ones(Float64, 4, 4, 2)
d[:,:,1] = c

# Fortran passes arguments by reference, so scalars go in as one-element arrays.
ccall(vec_,  Void, (Ptr{Int32}, Ptr{Float64}), Int32[4], a)
ccall(mat_,  Void, (Ptr{Int32}, Ptr{Int32}, Ptr{Float64}), Int32[4], Int32[4], c)
ccall(tten_, Void, (Ptr{Int32}, Ptr{Int32}, Ptr{Int32}, Ptr{Float64}),
      Int32[4], Int32[4], Int32[2], d)
The conclusion is that the underlying representation in Julia matches the column-major ordering you would expect in Fortran for arrays, matrices, and 3rd-order tensors. With a bit of syntactic sugar, one can mask all the pointer casting and create faster C/Fortran versions of the same Julia code. Hopefully, this language provides a good way to rapidly prototype code in a higher-level language and then port pieces to more traditional compiled object code in high-performance computing frameworks.

Julia isn’t the easiest language to use at the moment, but it seems to hold much promise. I had to write my ODE solver, which I guess is not too difficult. Now that I have this C/Fortran interfacing figured out, it should be trivial to add in ODE support from libraries from the national labs that have been well tested.

Categories: Education