## yt development: External Analysis and Simulation Code Support

This last week was the first full week on BitBucket, and so far I think it has been quite successful. The new development process is for most of the core developers to maintain personal forks for experimental or longer-term changes, and then to commit directly or merge when bug fixes or features are ready to be integrated. The list of forks is easily visible, and each individual fork's divergence from the primary repository can be viewed by clicking on the green arrows. All of the new mechanisms for developing with BitBucket are covered in the 'How to Develop yt' section of the documentation.

This last week I spent a few days at KITP's Galaxy Clusters workshop, where I presented on yt. There were a few major points that came out of my visit, talking to the simulators there, that are germane to the long term development of yt.

• As time goes on, yt should be increasingly viewed as a mechanism not just for analyzing data with its own, internal analysis routines, but as a mechanism for handling data, transforming it into a uniform interface independent of the underlying simulation code. This will allow for linking against and utilizing external analysis codes much more easily. (The three examples that came up while I was at KITP were a new halo finder, a weak lensing code, and a radiation transport code.) To facilitate the process of calling external codes from within yt, I've written a section in the documentation that covers it.
• There is a great deal of interest in ensuring yt works equally well with many different simulation platforms. This is a primary goal of my current fellowship, and I am working toward it. The next two codes that will be targeted for improvement are Gadget and ART, and I made good contacts at the workshop to this end.
• The idea of analysis modules, particularly in a block-programming environment, is compelling. There is quite a bit of interest in an interface where inputs and outputs are handled like pipes. I am still formulating my ideas on this. Last fall I experimented a bit with an introspection system that could handle arguments and hook up pipes, but it never got very far.

I had a number of scientific takeaways from the meeting, too, but that would all go into a different blog post.

This week I hope to finish up the adaptive ray tracing. This past week StephenS unveiled a halo serialization mechanism, which I think many people are excited about, and SamS continued developing his streamline code.

Author: Matthew Turk
Published on: Mar 7, 2011, 12:17:09 PM
Permalink - Source code

## yt development: BitBucket, Task Queues, and Streamlines

The major changes this week came mostly in the form of administrative shifts. However, SamS did some great work I'm going to hint at (he'll post a blog entry later) and I started laying the groundwork for something I've been excited about for a while, an MPI-aware task queue.

### BitBucket

For the last couple months, yt has been struggling under the constraints of the hg server on its hosting plan. The issue was that particular files checked into the repository (docs_html.zip for one, which is now gone, and amr_utils.c, also gone, for another) took a while to transfer over some connections. During this transfer, the (shared) hosting provider on hg.enzotools.org would kill the server process, resulting in an "abort" message given to the cloning user.

Basically, this was kind of awful, because it meant people couldn't clone the yt repo reliably, and it also meant that the install script would fail in unpredictable ways (usually indicating a Forthon or setup.py error.) I'm kind of bummed out that I didn't do something about this sooner; I suspect several people probably have tried to install yt and failed as a result of this. I added some workarounds that staged the download of yt over a couple pulls, which usually fixed it, but there was no reliable solution.

Enter BitBucket. A few of the developers had been using BitBucket for private projects, small repositories, and even (especially) papers that we'd been working on. For a while we'd been talking about moving yt there and trying to leverage the functionality it brings for Distributed Version Control Systems -- like forking and pull requests, social coding, and on and on -- and last week we hit the breaking point. So we created a new user (yt_analysis) and uploaded the yt repo, the documentation repo, and the cookbook, and we're going to be conducting our development there. The old addresses should all still work -- we have forwarded hg.enzotools.org to the new location.

One of the coolest aspects of this is that anyone can now "fork" the yt repository. This means you get your own personal version of the repository, which you can modify easily and then submit changes back upstream. I'm really excited about this and I would encourage people to take advantage of it. I've rewritten the Developer Documentation to describe how to do this.

All in all, I think this will be a very positive move. BitBucket has a number of value adds, including the forking model, but we should also immediately see a dramatic increase in the reliability of the repository.

### Streamlines

SamS has done some work implementing streamlines. Right now they operate by integrating any set of vector fields using RK4, and then plotting their paths using Matplotlib's mplot3d toolkit. He's working on some cool ways to colorize their values, and one of the things I am pushing for is to take any given streamline and convert it to an AMR1DData object. This would enable you to, for instance, follow a streamline in magnetic fields and calculate the density at every point along that streamline.
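For background, a single RK4 step through a vector field looks something like the sketch below. This is my own illustration, not SamS's actual implementation; `velocity` is a hypothetical callable returning the field value at a point.

```python
import numpy as np

def rk4_step(velocity, pos, dt):
    """Advance a streamline position by one fourth-order Runge-Kutta step."""
    k1 = velocity(pos)
    k2 = velocity(pos + 0.5 * dt * k1)
    k3 = velocity(pos + 0.5 * dt * k2)
    k4 = velocity(pos + dt * k3)
    return pos + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def circular(p):
    # A solenoidal test field; its streamlines are circles about the z-axis,
    # so the radius of the integrated path should be preserved.
    return np.array([-p[1], p[0], 0.0])

pos = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    pos = rk4_step(circular, pos, 0.01)
```

Because RK4 is fourth-order accurate, the path stays on the unit circle to high precision even after many steps, which is what makes it a reasonable default integrator for streamlines.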

Once Sam's comfortable with the feature as-is, he's going to blog about it, so I won't steal the thunder for his hard work here.

### Task Queues

Building on the ideas behind the time series analysis, I started work on the idea of a task queue that's MPI-aware. When the implementation is finished, it will act as a mechanism for dispatching work, fully integrated with time series analysis. Right now it's not even close to being done, but a few pieces of the architecture have been implemented.

The idea here is that you will be able to launch a parallel yt job, but have it split itself into sub-parallel tasks. For instance, if you had 100 outputs of a medium-size simulation to analyze, you would write your time series code as usual -- describing the actions you want taken, how to output them, and so on. You would then launch yt with a "sub-parallel" option, saying that you wanted to split the total number of MPI tasks into jobs of size N -- for instance, you could launch a 64-processor yt job and tell it to split into sub-groupings of 4 processors each. Each output would then be distributed, first come first serve, to the processor groups. When a group finished its job, it would ask for the next available job, and so on. When completed, the results would be collated and returned.
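The dispatch logic described above can be sketched within a single process space -- which matches the current state of the implementation -- using threads in place of MPI processor groups. All names here are hypothetical illustrations, not yt's actual API:

```python
from queue import Queue, Empty
from threading import Thread

def run_task_queue(outputs, analyze, n_groups):
    """First-come-first-serve dispatch: each worker 'group' pulls the next
    available output as soon as it finishes its current one, and the
    results are collated at the end."""
    work = Queue()
    for item in enumerate(outputs):
        work.put(item)
    results = [None] * work.qsize()

    def worker():
        while True:
            try:
                i, output = work.get_nowait()
            except Empty:
                return  # no work left; this group is done
            results[i] = analyze(output)

    groups = [Thread(target=worker) for _ in range(n_groups)]
    for g in groups:
        g.start()
    for g in groups:
        g.join()
    return results

# e.g. 100 "outputs" analyzed by 4 worker groups
res = run_task_queue(range(100), lambda n: n * n, 4)
```

In the real version, each worker would be an MPI sub-communicator of N processors rather than a thread, but the queue semantics -- pull the next job when free, collate at the end -- are the same.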

I'm excited about this, but right now it's in its infancy. I've constructed the mechanisms to do this within a single process space, with no sub-delegation of MPI tasks. The process of implementing this and properly integrating it with time series analysis is going to be a long one, but I am setting it as a task for the next major release of yt. If you're at all interested in this, drop me a line, and I'm happy to show you how to get started testing it out.

Author: Matthew Turk
Published on: Feb 28, 2011, 11:35:00 AM

## Example HEALpix Contour Rendering Movie

In response to Matt's post on the HEALpix rendering update, I thought it would be worth posting an example movie. This shows the all-sky rendering of an observer moving from the front face of a simulation through the volume to the back face. The test simulation is 32 Mpc/h on a side with 64^3 root grid cells and up to 4 levels of refinement. At the start it looks like a disc because the entire simulation is in front of the camera and by the end it is all around the sides, indicating the simulation is behind the camera. Enjoy!

Download movie

Author: Sam Skillman
Published on: Feb 21, 2011, 12:21:01 PM

## yt development: HEALpix and Contour Tree

This week there was not very much yt development. However, a few notes may be of interest. SamS has updated the HEALPix camera to support ordered projections; what this means is that you can now make volume renderings using a standard color transfer function, or even the Planck transfer function, that cover 4pi of the sky. I am still working on integrating a method for creating images easily, but for now the scripts from last week should work.

I worked a bit on improving the speed of the contour tree, but I am growing to suspect that a fully new algorithm will have to be implemented. I have been researching this, and I believe that the best method will require a 'union merge' data structure. Hopefully I will have something to report on this shortly. As of now, the contouring algorithm should be 10%-50% faster than it was a week ago, depending on the dataset characteristics. The idea of selecting astrophysical objects, rather than geometric objects, has been on my mind lately, and I committed a first pass at an API for this sort of selection. I hope to have more information about that in the near future, but I anticipate those development efforts ramping up around the end of March.

Author: Matthew Turk
Published on: Feb 21, 2011, 10:14:53 AM

## yt development: All-sky column density calculation

This week I added the ability to calculate all-sky column densities. This functionality uses HEALpix to compute equal-area regions on the sky and then shoots rays out from a central source to some fixed radius, accumulating values of a field along the way. Although so far I've only used it to calculate column densities of "Density", it could be used for other values as well, including all-sky weighted averages of quantities.

This works via a new HEALpixCamera object, which accepts an Nside argument (the number of pixels is 12*Nside*Nside) as well as a center and a radius. It's still a bit rough around the edges, but currently it will work to generate projections over the entire sky.

It's important to note that the difficulties inherent in sorting bricks radially prevent it from being used for transfer functions that involve any absorption. This means that currently it only works with "ProjectionTransferFunction" as an input.

To generate the projected pixels, we have to create a camera and ask it to cast rays along each pixel. This script will do that:

from yt.mods import *
import yt.visualization.volume_rendering.camera as camera
Nside = 32
pf = load('DD0008/galaxy0008')
cam = camera.HEALpixCamera([0.5,0.5,0.5], 0.2, Nside, pf = pf, log_fields = [False])
bitmap = cam.snapshot()


The returned bitmap will, as per usual, be an array of integrated values. Because we're using the projection transfer function with the HEALpix camera, it will be an ordered pixel list of shape (12*Nside*Nside, 1, 4), where the pixels are ordered according to the HEALpix pixel numbering. We now have to convert this to a regularly gridded set of values, between 0 and 2pi and 0 and pi, for the phi and theta coordinates.

yt provides a helper function to go from pixel ID to angle (as well as a few other things). You can access this helper function in this manner:

import yt.utilities.amr_utils as au
from numpy import pi
phi, theta = na.mgrid[0.0:2*pi:800j, 0:pi:800j]
pixi = au.arr_ang2pix_nest(Nside, theta.ravel(), phi.ravel())
img = na.log10(bitmap[:,0,0][pixi]).reshape((800,800))


The call to mgrid creates a regularly-spaced mesh of values. We then ask HEALpix what the pixel IDs are that fall into each of these regularly spaced mesh values, and then we apply those pixels in that order. This transformation will, someday, be implicit in the snapshot() call.

At this point we can plot our regularly spaced mesh using one of several projections. We'll do the Mollweide projection. To do this, we import the appropriate Matplotlib components and plot using the imshow command:

import matplotlib.figure
import matplotlib.backends.backend_agg

fig = matplotlib.figure.Figure((10, 5))
ax = fig.add_subplot(1,1,1,projection='mollweide')
image = ax.imshow(img, extent=(-pi,pi,-pi/2,pi/2), clip_on=False, aspect=0.5)
cb = fig.colorbar(image, orientation='horizontal')
cb.set_label(r'$\mathrm{Column}\/\mathrm{Density}\/[\mathrm{g}/\mathrm{cm}^2]$')
canvas = matplotlib.backends.backend_agg.FigureCanvasAgg(fig)
canvas.print_figure('allsky.png')


As you can see, it's currently still a bit rough around the edges (the image is below) but it's on the way to being very useful. In particular, increasing Nside should improve the image quality, and adjusting the axis display in matplotlib can remove the latitude and longitude labels. There is also currently a bug (which I have reported but not heard back on) in matplotlib related to changing the size of the axes object.

Hopefully soon enough this will be ready to be wrapped more simply into yt, and it can be made available in a more straightforward fashion. For now, paste 1503 includes a full script that will all-sky render and output an image like the one below.

Author: Matthew Turk
Published on: Feb 14, 2011, 12:13:00 PM

## yt development: Documentation

As a result of progress in my scientific goals, and the application of recent yt developments to them, I did not make many changes or developments in yt this week. When I did work on yt, I primarily spent time re-organizing the documentation and fixing several errors. I have added an "installation" section, consolidated a few sections, and written two new sections on how to make plots and how to generate derived data products. I think that the documentation is now much easier to navigate and follow.

This week I also fine-tuned the API for the bridge that connects Enzo and Python, relying on feedback from other Enzo developers. This work is the first step in creating a generalized library for constructing initial conditions; it is not meant to be Enzo-specific. I will report more on this, but the ultimate goal is to be able to construct initial conditions using scripts, rather than constructing multiple C routines. This should make scientific experimentation much more accessible.

One final note is that this last week saw the first successful execution of a large-scale Enzo simulation with inline analysis provided by yt, using the EnzoStaticOutputInMemory object. BrittonS is driving that simulation forward, and he's also conducting benchmarks to see how expensive inline analysis is.

Author: Matthew Turk
Published on: Feb 7, 2011, 11:29:00 AM

## yt development: Time series, and more

Not much yt development went on in the last week; I spent some time working with Enzo and driving forward simulation goals, which resulted in some development that directly benefited those simulation goals. However, this fortuitously coincided with work I have been eager to return to for quite some time: namely, time series analysis!

### Time Series Analysis

The problem with time series analysis in yt has, to this point, been one of verbosity and clunkiness. Typically, unless using the EnzoSimulation class (which, as of right now, is available only for Enzo), one would set up a loop:

for pfi in range(1, 100):  # e.g. loop over output indices
    fn = 'DD%04i/DD%04i' % (pfi, pfi)
    pf = load(fn)
    process_output(pf)


This has always been somewhat dissatisfying to me. The idea of a "time series" of data -- another data object, moving along a temporal dimension instead of a spatial one -- has been more alluring, but difficult to implement in a consistent way. In August of 2009 I made my first attempt at this, creating TimeSeries objects that contained multiple parameter files, but it never went anywhere.

This week, overwhelmed with time series data, I spent some time bringing this functionality in line with the current state of yt. The idea behind it is that the underlying data and the operators that act on that data can and should be distinct. The code can be seen in yt/data_objects/time_series.py and yt/data_objects/analyzer_objects.py. I have created a number of analyzer objects which perform common tasks, as well as an easy method for generating your own analyzer objects.

As of the current tip of the repository, the method for generating and adding time series data is somewhat clunky; I can only guarantee it works with Enzo data where the file OutputLog exists. The examples below assume that.

You can now create a TimeSeries object and act on it; in this case we'll create an EnzoTimeSeries from the OutputLog, and we'll find the maximum density at all times.

from yt.mods import *
ts = EnzoTimeSeries('MyData', output_log = 'OutputLog')
max_rho = ts.tasks['MaximumValue']('Density')


You can see the (currently) available tasks by calling ts.tasks.keys(). You can also set up chains of tasks to be evaluated by constructing lists of AnalysisTask objects and feeding them to ts.eval; you can create your own tasks as well. For instance, if you wanted to look at the mass in star particles as a function of time, you would write a function that accepts params and pf and then decorate it with analysis_task. Here we have done so:

@analysis_task(('particle_type',))
def MassInParticleType(params, pf):
    dd = pf.h.all_data()
    ptype = (dd['particle_type'] == params.particle_type)
    return (ptype.sum(), dd['ParticleMassMsun'][ptype].sum())

ms = ts.tasks['MassInParticleType'](4)
print ms


I'm very excited about this, and I think it will be a centerpiece of the next major release of yt (and its affiliated documentation.) I also hope to see BrittonS's EnzoSimulation class act as an initializer for this data object, so that more advanced selections of time series objects can take place when handling Enzo data.

### libconfig

I've been spending some time looking at alternate configuration methods. This week I spent some time working with Enzo's problem generation, which is in need of an overhaul, and attempting to expose it to yt to enable rapid construction of new initial conditions. I believe that Enzo may end up moving to libconfig-generated configuration files, but the outcome of that is not certain. In order to future-proof yt I have included a modified version of libconfig and a wrapper around it.
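For reference, libconfig files look roughly like this -- a generic illustration of the format's groups, arrays, and typed assignments, not an actual Enzo parameter file:

```conf
# Hypothetical libconfig-style problem definition
cosmology = {
  omega_matter = 0.27;
  omega_lambda = 0.73;
  hubble       = 0.70;
};
grids = ( { level = 0; dims = [64, 64, 64]; } );
```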

This is in need of testing! If you would like to test it, and you are using the development branch of the code, open up yt/utilities/setup.py and change INSTALL_LIBCONFIG_WRAPPER = 0 to INSTALL_LIBCONFIG_WRAPPER = 1 and try re-building yt. If this works, we're in the clear -- and I'd very much like to hear about it, so drop me a line!

### Tab-completion

I have added tab-completion for field names in iyt. This means if you do:

pf = load('DD0040/DD0040')
dd = pf.h.all_data()
dd['


and then hit tab, it will pop up a list of all possible fields that you can access. It only works for data objects as of right now.

### Documentation

I've rebuilt the documentation a couple times this week, to add a table of contents for the field list, to update the changelog and to add a section on how to load data. The section on loading data is under "Analyzing Data" and is designed to help people get started with yt if they don't know how to get started with getting their data in. It also includes a list of caveats for the various codes that are supported.

I also realized that the section on "How to Make Plots" was pretty empty, so I filled that in with information and links to the various methods of PlotCollection.

That about covers it. See you next week!

Author: Matthew Turk
Published on: Jan 31, 2011, 10:53:10 AM

## yt development: 2.0, Cython, and physics module wrapping

This is the second blog entry in the weekly series, with some updates on what took place last week with respect to yt development. One of the more exciting things is the final one, which is the start of what I want to focus on for the next couple months or years: integration of physics modules with analysis code, and then the ultimate inversion of that relationship.

### yt-2.0

This week saw the release of yt 2.0, which has the most prominent feature of being a completely re-organized code base, ready for expansion, with clearer entry points for modification, and a faster import time. The "stable" branch was updated to this reorganized codebase, and so all new installations should have this same basic layout. The new layout looks something like this:

yt/
yt/data_objects
yt/utilities
yt/frontends
yt/gui
yt/visualization
yt/analysis_modules


Each of these contains python files, possibly submodules, and so on. But in contrast with the old layout, where things were named lagos and raven and so on, I think it's safe to say we've traded whimsy for utility.

### Cython

(If you'll forgive me, this is taken from an email I wrote to yt-users.)

Cython is a Python-like dialect that compiles down to C code. It understands NumPy arrays and Python objects natively, and can produce code that operates at speeds competitive with raw, hand-written C code. We use it in yt to write things like the volume renderer, fast interpolation, level set identification, PNG writing, and so on. You can see most of the Cython code in yt/utilities/_amr_utils/*.pyx .

In the past, the Cython code was converted to C before being put into the repository. This is visible in yt/utilities/amr_utils.c. If you take a second to open this up, you'll note a couple things about it. The first is that it's generated code: notoriously awful to read, and 100% impossible to maintain. That's why it's never touched directly, and only generated by Cython. But, the problem with that is that if we want to make small changes, they cascade into ridiculously long changesets and diffs, which end up growing the size of the mercurial repository by far more than is appropriate.

To get around this, I have added an install-time dependency on the Cython package. To ensure that this will cause no problems during the transition, installation of Cython was added to the install script a while ago, and I have additionally added a check to setup.py. If Cython is not found, it should install it. The only cases where this should cause a problem are those where yt was installed with elevated (sudo) permissions. Now, every time yt is rebuilt, the C code that was previously in yt/utilities/amr_utils.c is regenerated from the Cython code in yt/utilities/_amr_utils/*.pyx. This means that we no longer need to update amr_utils.c in the mercurial repository.

Adding Cython brings with it a number of interesting things we can do; these include much faster iteration on improvements to, say, the volume renderer. Additionally, there are some other very interesting things one can do with the speed improvements Cython brings, which hopefully we can explore in the future.

(Here ends the email extraction)

One possible place to explore using Cython is writing faster, single-cell kernels on the fly. I have been thinking about this for things like calculating flux across isocontours, which may need to be redefined on the fly inside a running session. This could be very useful for looking at the collapse state of star-forming clumps in a calculation. Another is the subject of the next section, which is generating interfaces to physics modules.

### Physics Module Wrapping

As a bit of background, one of the goals I am attempting to reach is to be able to run individual physics modules (chemistry, cooling, possibly even hydrodynamics) on a given set of cells; this will enable further exploration of things like the cooling time, the chemical timescales and so on. More importantly, it's the first step toward being able to run an actual simulation inside yt, which is the goal of this project over the next several years. More immediately, I am planning to further develop the Enzo chemistry solvers, and this is necessary to do so in a rapid manner. A quick dig through the early stages of yt will reveal that this worked, once upon a time, with the older Enzo solvers; however, in order to drive forward my near-term solver development I will not be able to use those wrappers.

This last week I spent some time attempting to write a semi-general wrapper for Enzo's physics solvers (in this case, the chemistry solver) that could be called from Python. I ran into several difficulties, which I will chronicle here, but I believe I have come upon a solution; unfortunately it is a solution that may take longer to implement than I had hoped.

In the past, before the development of the branch of Enzo that would become 2.0, constructing wrappers around Fortran modules was much simpler. Back then, the solver code had been slightly more sanitized than it is now: comments in F77 were more uniformly used ("c" instead of "!"), there were few end-of-line comments, few lines running over 80 characters, and so on. Additionally, the transition to Enzo 2.0 brought with it uniformly 64-bit-aware code; for Fortran modules, this relies on "-fdefault-real-8" in the GNU compilers and "-r8" in the Intel compilers. This promotes all "REAL" variables to higher precision within the Fortran code.

f2py is a module that comes with NumPy which can parse and generate wrappers for Fortran code. However, the default promotion presents a difficulty, and I think we cannot use f2py for this. I found that the next-generation f2py poses a similar problem, and I was unable to get fwrap, the new Cython-based wrapping tool, into a working state. I think direct wrapping of Fortran is outside the scope of my near-term goals. However, I was able to locate a piece of software called cppheader, based on PLY, that is able to parse, in a general fashion, header files from C++. I believe that I can use this to construct general wrappers, using Cython, for the entire Grid object, thus exposing the Fortran solvers indirectly. I have a set of hand-constructed wrappers that I wrote last summer, but a generative method will be easier to maintain. This will remove the complexity that arises from the precision handling in Enzo. Constructing wrappers for other simulation platforms will likely take a similar form.

That wraps it up for this week.

Author: Matthew Turk
Published on: Jan 24, 2011, 1:10:50 PM

## yt development: star particle rendering, simple merger trees and documentation

This is the first of a new series of "what's up with yt" blog posts I'm going to be writing. By keeping this log, I hope that maybe some things that would otherwise get lost in the version control changesets will get brought to greater light. This covers the time period of the first couple weeks in January.

### Star Particle Rendering

On the mailing list, the question of adding star particles to a volume rendering was raised. The question was mostly related to the idea of adding an annotation -- if you have a star particle somewhere in the simulation, for instance in galactic star formation, you likely want to annotate that somehow. After a bit of discussion, the question was raised about using star particles in the volume rendering as sources of emission; could star particles contribute to the overall image? I consulted with a couple people and then added a first-pass implementation to add them in to the brick casting. Each star particle is defined as a Gaussian, and all star particles within the 99th percentile of the center of a cell are sampled and added on to the emission of a cell during ray-casting.

This can be an expensive operation, and it comes with a few important caveats. The first is that right now you are not guaranteed to sample the centroid of the Gaussian, so this simply may not work correctly unless you have many, many star particles clustered together. I hope to eliminate this problem by moving to an integration method that is aware of the underlying function (erf / Gaussian) rather than pure sampling with rectangular integration. The second caveat is that this cannot, in its current form, be used for radiation transport, as the method does not conserve flux from star particles: even relative luminosities of star particles cannot be trusted. This is currently a pure-visualization algorithm. The solution to the second issue is not clear to me yet.
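To see why pure sampling can't be trusted for fluxes, consider a 1D cartoon (my own illustration, with made-up numbers): when the sample spacing is wide compared to the Gaussian's sigma, the recovered flux depends entirely on where the samples happen to land relative to the centroid, while an erf-based integral is exact.

```python
import math

def gaussian(x, sigma):
    """A unit-total-flux Gaussian profile."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def sampled_flux(dx, sigma, offset=0.0):
    """Rectangular (sample-and-multiply) integration at spacing dx."""
    n = int(5.0 * sigma / dx) + 1
    return sum(gaussian(offset + i * dx, sigma) for i in range(-n, n + 1)) * dx

def erf_flux(a, b, sigma):
    """Exact flux between a and b via the error function."""
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf(b / s) - math.erf(a / s))

coarse_on = sampled_flux(2.0, 0.5)               # a sample lands on the peak: overestimates
coarse_off = sampled_flux(2.0, 0.5, offset=1.0)  # samples straddle the peak: underestimates
fine = sampled_flux(0.05, 0.5)                   # spacing well below sigma: accurate
```

With cells four sigma wide, the two coarse estimates differ by nearly a factor of four for the same star, which is exactly why relative luminosities come out wrong; the fine sampling and the erf integral both recover the true flux of 1.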

To use this, you will need to re-Cythonize your yt/utilities/amr_utils.pyx file and then rebuild your extensions. As of right now, there are no real convenience functions for rendering star particles, so the following relatively verbose code has to be used to add them appropriately to the volume renderer. This assumes you have already created a camera object (cam) and a data object (dd).

import yt.utilities.amr_utils as au
import yt.visualization.image_writer as iw

si = (dd["creation_time"] > 0)
x = dd["particle_position_x"][si][::10]
y = dd["particle_position_y"][si][::10]
z = dd["particle_position_z"][si][::10]
age = dd["ParticleAge"][si][::10]  # subsample to match the positions
age = (age - age.min())/(age.max() - age.min())
cc = iw.map_to_colors(age, "hot").squeeze()[:,:3].astype("float64") / 255.0

skc = au.star_kdtree_container()
skc.add_points(x, y, z, cc)
skc.coeff = 1e-5
skc.sigma = 0.5 * pf.h.get_smallest_dx()
cam.volume.initialize_source()
for b in cam.volume.bricks: b.set_star_tree(skc)

Now when you render, it should add the star particles. In the future, once this feature leaves beta, this will be much easier to do. For a complete working example, see http://paste.enzotools.org/show/1482/.

Here's an example image of just star particles being rendered. It's fuzzy and doesn't look like much yet, but it shows the basic concepts.

### Simple Merger Tree

Enzo, as of 2.0, can now output Friends-of-Friends catalogs on the fly during a simulation, along with the particle IDs of the particles belonging to those halos. Identifying mergers between the halos found with this method was not implemented, so last summer I wrote a small script to do so. I've added it to yt/analysis_modules/halo_merger_tree/enzofof_merger_tree.py; you can access it via amods.halo_merger_tree.EnzoFofMergerTree. It accepts two output ID numbers and loads the corresponding catalogs, then calculates the parentage and childhood percentages of each pair of halos that it can identify.

This means that you get back a set of tuples, describing the percentage of particles from one halo that went to make up a child halo, as well as the percentage of that child halo's particles that came from a particular parent halo. JohnW then extended this to output a graphviz plot of how the halos merged and formed over time. He also added the ability to avoid halos that are too small and a couple other feature additions.
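The underlying calculation can be sketched as a particle-ID set intersection. This is a simplified illustration of the idea, not the actual enzofof_merger_tree code; all names here are hypothetical.

```python
def match_halos(parents, children):
    """For each (parent, child) pair sharing particles, compute the percentage
    of the parent's particles that went into the child, and the percentage of
    the child's particles that came from that parent.

    parents, children: dicts mapping halo ID -> set of particle IDs."""
    matches = []
    for pid, p_ids in sorted(parents.items()):
        for cid, c_ids in sorted(children.items()):
            common = len(p_ids & c_ids)
            if common == 0:
                continue
            matches.append((pid, cid,
                            100.0 * common / len(p_ids),   # parentage percentage
                            100.0 * common / len(c_ids)))  # childhood percentage
    return matches

# Two small halos at one output merging into a single halo at the next:
parents = {0: {1, 2, 3, 4}, 1: {5, 6}}
children = {0: {1, 2, 3, 4, 5, 6}}
links = match_halos(parents, children)
```

Here both parents contribute 100% of their particles to child 0, which in turn draws two-thirds of its particles from parent 0 and one-third from parent 1 -- exactly the kind of tuple described above.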

It's a very simple system, but it works well for the EnzoFOF calculations. In the future, I see it acting as a supplement to StephenS's more complicated and capable merger tree identification system.

### Documentation

This week my yt development was mostly focused on getting the yt documentation brought up to the current state. To that end, I re-organized it to fit into a better conceptual flow, added in the orientation section I wrote last December, and asked for feedback from a couple people. JeffO in particular gave extremely helpful comments about the structure and flow. I think that overall it's much better. StephenS also updated his docs to reflect the new import scheme. The documentation is now de-coupled from the version control repository. This should help keep them up to date, as we no longer have to execute a build and check that build in to update the online documentation. With this change, I think we're now ready for release, so hopefully sometime today that will go out.

### Maestro

ChrisM and JeffO committed and pushed changes to add MAESTRO support to yt. This is very exciting, as it's another BoxLib code that seems to be supported without a lot of engine rewriting. Hopefully this will be useful to a number of people.

Author: Matthew Turk
Published on: Jan 17, 2011, 11:27:00 AM

## AMR kd-Tree rendering added to yt

After a significant amount of development and restructuring, I have added the AMR kd-Tree rendering framework to yt. There are several posts on this blog about this module already, so I won't go over all the background information again. Here I'd like to showcase some of the recent successes and capabilities of the volume rendering within yt.

New optimization options: There are a few important additions that have made it possible to render some of the largest AMR simulations we have available. The first is parallel kd-Tree construction. The kd-Tree is constructed by first building down to the point where there are N kd-Tree nodes. At that point, each processor is assigned a single node and continues building its own subtree. This volume-decomposes the kd-Tree construction process. If requested, the tree can then be combined through an MPI all-to-all-like process.
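The decomposition step can be sketched like this -- a simplified, serial illustration of the idea, not the actual parallel implementation:

```python
def decompose(bounds, n_pieces):
    """Bisect the longest axis of the domain, breadth-first, until there is
    one subvolume per processor; each processor would then build the kd-tree
    for its own subvolume independently."""
    leaves = [bounds]
    while len(leaves) < n_pieces:
        lo, hi = leaves.pop(0)
        # split the longest axis of this subvolume at its midpoint
        axis = max(range(len(lo)), key=lambda i: hi[i] - lo[i])
        mid = 0.5 * (lo[axis] + hi[axis])
        left_hi = list(hi); left_hi[axis] = mid
        right_lo = list(lo); right_lo[axis] = mid
        leaves.append((list(lo), left_hi))
        leaves.append((right_lo, list(hi)))
    return leaves

# e.g. split the unit cube among 4 processors
domains = decompose(([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]), 4)
```

The subvolumes tile the domain exactly, so each processor's subtree covers a disjoint piece of the volume and the per-processor builds can proceed without communication.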

The next optimization comes in through the type of ray casting. For the largest AMR simulations, it can take time to reduce the kd-Tree once it has been constructed in parallel. To counter this, we have added a 'domain' traversal such that each processor only renders its subtree. This means that all the data stays local to a processor, independent of viewpoint or any other camera parameters. For datasets such as the 512L7 'Santa Fe Lightcone' simulation, this turned out to be very important.

Finally, for when data reading dominates the rendering process, we have added an option to the camera object called no_ghost. When no_ghost=True, instead of interpolating the vertex-centered data using ghost zones (which requires reading substantial data from neighboring grids), it extrapolates out to the vertices using the grid's own data. While less accurate, when using smooth, broad transfer functions, or when just getting an initial look at the data, this provides a substantial speedup.
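A 1D cartoon of the difference (my own illustration; yt's actual code does this with trilinear interpolation in 3D): for smooth data, the extrapolated boundary vertices agree with the ghost-zone answer, which is why the shortcut works well with broad transfer functions.

```python
import numpy as np

def vertices_with_ghosts(cells, left_ghost, right_ghost):
    """Vertex-centered values by averaging adjacent cell centers; the two
    boundary vertices need one ghost cell from each neighboring grid."""
    padded = np.concatenate([[left_ghost], cells, [right_ghost]])
    return 0.5 * (padded[:-1] + padded[1:])

def vertices_no_ghost(cells):
    """The no_ghost shortcut: interior vertices averaged as before, but the
    boundary vertices extrapolated from the grid's own two edge cells."""
    inner = 0.5 * (cells[:-1] + cells[1:])
    left = cells[0] + 0.5 * (cells[0] - cells[1])
    right = cells[-1] + 0.5 * (cells[-1] - cells[-2])
    return np.concatenate([[left], inner, [right]])

cells = np.array([1.0, 2.0, 3.0, 4.0])   # a smooth (linear) field
with_ghosts = vertices_with_ghosts(cells, 0.0, 5.0)
without = vertices_no_ghost(cells)
```

For this linear field the two methods agree exactly; the error only appears when the field varies nonlinearly across the grid boundary, which sharp transfer functions are the most sensitive to.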

Showcase examples: For each of these, I am selecting a slice of the volume, since there are so many objects; the depth of each is 0.2 in units of the box length.

The first example is a 256^3 root grid with 5 levels of AMR, for a total of 48339 grids. Using an 8-core Mac Pro 2.93 GHz desktop, we were able to render a 4096^2 image in just over 100 seconds:

• kd-Tree construction: 21 seconds
• data reading: 8 seconds
• ray casting: 81 seconds

Here is a zoom in of roughly 1/4 of the image:

The second example is a 512^3 root grid + 7 levels of AMR, and is also known as the Santa Fe Lightcone. This simulation has 392,865 grids. For this simulation, we used 256 mpi tasks running on 1024 total cores of TACC Ranger (4way1024). Here the parallel kd-Tree construction really shined, as shown in the timing stats. The image was also 4096^2.

• kd-Tree construction: 6 seconds
• data reading: 100 seconds
• ray casting: 50 seconds

Here are a couple shots, the first showing the entire field of view, and the second showing full resolution in a small portion of the simulation.

I'm pretty happy with the performance and results. Since the data doesn't have to be moved around in between frames, I'm hoping to make some great movies of these large datasets, exploring the rich interactions and features that are undoubtedly hiding in the data.

If you give this new renderer a shot and have any impressions or questions, let me know!

Author: Sam Skillman
Published on: Nov 9, 2010, 9:32:03 PM