
Converting xfig (.fig) to vector formats using dktools

March 28th, 2012

Recently I needed to convert old xfig (.fig) files to SVG so I could edit them in Inkscape while preparing figures for my dissertation. Luckily I found out about dktools. However, there wasn't a convenient package in an Ubuntu repository. Fortunately, it's fairly easy to compile. Just in case someone finds this useful, I'll leave installation instructions here for Ubuntu users:


# run these from inside the unpacked dktools source directory:
sudo apt-get install libpng-dev libdb5.1-dev libbz2-dev libjpeg-dev libtiff-dev libsnmp-dev
./configure
make
sudo make install

To convert a .fig to .svg:

fig2vect -lsvg fig.fig fig.svg
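For a whole directory of .fig files, the same call can be scripted. This is a small sketch (the directory path and function names are my own; the fig2vect flags are just the ones shown above):

```python
import pathlib
import subprocess

def fig2svg_cmd(fig_path):
    # build the fig2vect command line for one .fig file,
    # writing the .svg next to it with the same base name
    fig = pathlib.Path(fig_path)
    return ["fig2vect", "-lsvg", str(fig), str(fig.with_suffix(".svg"))]

def convert_all(directory):
    # run fig2vect on every .fig file in the given directory
    for fig in sorted(pathlib.Path(directory).glob("*.fig")):
        subprocess.check_call(fig2svg_cmd(fig))
```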


Subrepositories in Mercurial (hg)

December 3rd, 2011

I initially created a single Mercurial repository to house my research-related publications, presentations, and thesis. Since these publications generally have coauthors, I needed to share parts of the repository. However, I did not want to run two repositories, and ideally wanted to keep my current repository structure, since it makes sense for me. On the other hand, I did not want to share the whole publication repository with all my coauthors, when what they are really interested in is the particular publication we are working on, and I did want my collaborators to be able to view the relevant 'hg log' history. Fortunately, Mercurial (hg) can account for all of this through subrepositories, though there is a bit of trickiness in setting things up.

1. First, I generated a separate Mercurial repository from the current one, containing only the desired subfolder. The following bash shell script handles moving the logs and desired files over. The first argument is the subfolder you want to turn into a repository, the second is the path to the parent repository, and the third is the desired location of the new repository.

echo include $1 > /tmp/myfilemap
echo rename $1 . >> /tmp/myfilemap
hg convert --filemap /tmp/myfilemap $2 $3
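The same extraction can be wrapped in a small Python helper. The filemap-building function is pure; the hg invocation is a sketch and assumes Mercurial's convert extension is enabled (function names are my own):

```python
import subprocess
import tempfile

def build_filemap(subfolder):
    # 'include' keeps only the subfolder; 'rename ... .' promotes
    # its contents to the root of the new repository
    return "include %s\nrename %s .\n" % (subfolder, subfolder)

def extract_subrepo(subfolder, parent_repo, new_repo):
    # write the filemap to a temporary file and hand it to 'hg convert'
    with tempfile.NamedTemporaryFile("w", suffix=".map", delete=False) as f:
        f.write(build_filemap(subfolder))
        mapfile = f.name
    subprocess.check_call(["hg", "convert", "--filemap", mapfile,
                           parent_repo, new_repo])
```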

2. Next, the subfolder is removed from the parent repository via 'hg remove path/to/subfolder'.
3. Following the instructions from the Mercurial wiki, go to the root directory of the parent repository and create the necessary .hgsub file. In my case, my subfolder was two levels down, so the following was used. The paths are relative to the root of the parent repository.

folder/subfolder = folder/subfolder

4. 'hg add .hgsub' in the parent directory.
5. Clone the new subfolder repository back into place, i.e. 'hg clone <path-to-new-repo> folder/subfolder'.
6. 'hg commit -m "description here"'

At this point, it seems everything works as desired. Pulling, updating, and cloning the root repository records the changes of the subfolder, while the subfolder itself can be updated, pulled, and pushed by collaborators.

Using paraview as a post-processor.

October 19th, 2011

I reviewed paraview about two years ago, and more or less lambasted it for being poorly documented and for a mailing list that was not as responsive to "beginners" questions as one might imagine. Some of the examples on the wiki also did not work back then. I have been quietly monitoring and occasionally evaluating paraview to see how it has been improving. In one of the recent versions this year, they actually released the user guide/handbook, which contains some useful information. The file reader formats have greatly improved, and many of the features now rival those of Tecplot, which is not free.

While my research lab recently acquired Abaqus, all of my dissertation code is in the Finite Element Analysis Program (FEAP) from Berkeley. It was written mainly by Robert Taylor (one of the godfathers of finite elements) and by Sanjay Govindjee. It is a pretty good code, written mainly in Fortran/C. Unfortunately, while it includes some sophisticated numerical schemes and efficient solvers, it does not provide much in terms of pre- and post-processing for finite elements. In the past we used Ansys to generate meshes and manipulated the boundary conditions by hand. However, we have since obtained patient-specific heart meshes, where the number of nodes is much larger and the geometry/numbering is not so obvious.

Fortunately, paraview has some powerful selection filters and tools available. Unfortunately, after each selection the elements and nodes are renumbered. Paraview actually keeps track of the global IDs, but for some reason they are filtered out of the spreadsheet view by default; this can be changed easily. To use paraview as a pre-processor, the idea is to apply successive "Extract Selection" filters after using the "Frustum" or "Surface" selection tools for nodes/elements in the Selection Toolbar. Think of it as taking the original mesh and then slicing or cutting away the parts that are not of interest until you are left with only what you want. Lastly, in the selection explorer you can also choose "Invert Selection", which effectively lets you select the portion you want to cut away from the mesh.

The following are instructions to perform this pre-processing selection using paraview.

  1. If not already done, go to Preferences > Charts. Then delete the line that says “vtkOriginalIds” that is in the “hidden” list. This will show the “vtkOriginalIds” value in the spreadsheet view and allow you to “Save Data” on each “Extract Selection”, such that one will get a mapping from vtkOriginalIds to the renumbered fields.
  2. Use the provided selection tools, and “Extract Selection” successively until you arrive at the selection you want.
  3. Click on each “Extract Selection” filter and click on File > Save Data. Select a filename, and make sure to specify either Point/Cell/Field data so you save the proper mapping.
  4. Next you can use the following python script to get the proper mapping using 0-index numbering (paraview indexing). The script takes in a list of "Extract Selection" csv files and an output filename for the global id mapping.
Script source is below:

import sys

def parse(args):
    # args: one or more "Extract Selection" csv files, in the order the
    # extractions were applied, followed by the output filename
    extraction_paths, output_path = args[:-1], args[-1]

    ids = []
    for path in extraction_paths:
        # the first column of each data row is vtkOriginalIds
        with open(path) as extraction:
            origids = [int(line.split(",")[0])
                       for line in extraction.readlines()[1:]]

        # if ids is already filled, compose the mappings: indices in this
        # extraction refer to rows of the previous extraction
        if ids:
            origids = [ids[i] for i in origids]

        ids = origids

    with open(output_path, "w") as outputfile:
        for i in ids:
            outputfile.write("%d\n" % i)

if __name__ == "__main__":
    parse(sys.argv[1:])
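The composition step in the script can be sanity-checked on toy data (the ids below are made up, not from a real mesh):

```python
def compose(previous_ids, origids):
    # origids are row indices into the previous extraction's id list;
    # composing the two maps final rows back to the original mesh
    return [previous_ids[i] for i in origids]

# level 1: rows 0..3 of the first extraction came from original nodes 10,11,12,13
level1 = [10, 11, 12, 13]
# level 2: rows 0..1 of the second extraction came from rows 3 and 1 of level 1
level2 = [3, 1]

print(compose(level1, level2))  # -> [13, 11]
```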

Migrating old XP+ computers to virtual machines

October 28th, 2009

Since my current laptop (2004) is pretty much on its last legs, and I don't feel like wasting or hunting for my Windows XP keys, I figured I would just migrate my laptop. Similarly, my desktop built in 1999 (Pentium 4 A, no hyperthreading), with 512 MB of RAM, non-functional USB ports, and a dead sound card, still has some useful data on it, so I figured I might as well start there and see how much of a hassle it would be to back up those systems. The best way seemed to be to somehow 'dd' the hard disk and then run it in VirtualBox. However, I remembered reading something about problems with Windows "memorizing" hard disk information, and in general 'dd' is super slow.

Luckily I ran across this Lifehacker article via a quick Google search. It doesn't contain much information, except that it mentions a new (Oct 9, 2009?) Sysinternals tool called 'disk2vhd', which supposedly makes imaging Windows drives a piece of cake. Some other Ubuntu-related searches turned up similar recommendations, but also noted that there could be some IDE/SATA driver issues and that you had to fix these driver issues "a priori". I won't go into the details, since it seems all of this can be fixed in the image afterwards.

So I went about my way, running 'disk2vhd' on two Windows drives totaling 50+ GB. This went smoothly on my abandoned desktop, and it took about an hour or two to create the VHD. Copying the VHD over took a few more hours (I did it overnight). However, when I tried to run the VHD in VirtualBox, I got a GRUB error (error 21). I figure this has to do with the fact that the MBR points to GRUB, which lives on the hard disk I didn't image. Anyway, a quick search led me to SuperGrubDisk. I just ran it and selected the "Win" option, as I had no intention of creating a dual-boot VHD system. This fixed the GRUB problem.

Windows XP started booting up, and then I ran into the infamous "0x0000007B" boot BSOD issue. It has something to do with Windows not liking a change in IDE hardware. At this point another quick search seemed to indicate I would have to do the "a priori" fix step after all. However, I wasn't about to spend another 8 hours backing up my computer, as I had already spent enough time trying to make it work. Luckily I ran across a confusing forum post which hinted that the Ultimate Boot CD for Windows (UBCD4Win) had something called "Fix_hdc" that could possibly fix the driver issue. I happened to have a UBCD4Win ISO lying around, so I loaded it up, found 'Fix_hdc', and just selected the "USB option".

Magically, it seemed to do the trick, and after a reboot of the virtual machine, everything worked flawlessly.

Summary:

  1. Run "disk2vhd" from Sysinternals on the drives you want to image. (Optionally, run mergeIDE and so forth to possibly avoid step 3.)
  2. If you had some multi-boot grub setup and grub doesn’t live on the same partition (which it probably doesn’t), use a grubrecovery disk such as SuperGrubDisk.
  3. If you get the “0x0000007B” BSOD error during boot, just grab UBCD4Win, boot it up, and then run Fix_hdc. It should be in Start>Programs>Registry Tools>FIX_hdc>Fix Hard disk/USB
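Once the VHD is fixed up, the VirtualBox side can be scripted too. The following sketch just builds the VBoxManage command lines to register a VM and attach the VHD; the VM name and file path are hypothetical, and the exact flags may vary between VirtualBox versions, so check against your installation:

```python
def vbox_commands(vm_name, vhd_path):
    # Sketch of the VBoxManage calls to create/register a VM and attach
    # a VHD to its IDE controller. Flags are as I understand the
    # VirtualBox CLI; verify them with 'VBoxManage --help' first.
    return [
        ["VBoxManage", "createvm", "--name", vm_name, "--register"],
        ["VBoxManage", "storagectl", vm_name, "--name", "IDE", "--add", "ide"],
        ["VBoxManage", "storageattach", vm_name, "--storagectl", "IDE",
         "--port", "0", "--device", "0", "--type", "hdd",
         "--medium", vhd_path],
    ]

for cmd in vbox_commands("OldXP", "olddesktop.vhd"):
    print(" ".join(cmd))
```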

Overall, the process was fairly painless, mostly just clicking a few "auto" buttons. VirtualBox seems to run well, and so far the experimental 3D drivers play relatively nicely. I've watched some movies through the VM and played a few games (none of them super resource-hungry). Music seems to work fine. Ironically, when I ran Windows 7 earlier this year, the audio did not play smoothly and video clearly didn't work. Windows Media Player seems to play video much more jerkily than VLC, where it is less noticeable. I don't remember off-hand what my old VLC settings were, though, so I can't comment much.

I'd say the conversion was a definite success, and I will do the same with my other old computers so I will have access to software installed on them that I can no longer find. At any rate, I now have a "working" backup of my old system that can run in several VM packages (VirtualPC, VirtualBox, VMware). I'm fairly certain that such formats will continue to be supported (virtualization seems to be the future of everything), so it should be possible to keep a copy of my old machines with me on new computers in the future.