The Rockstar Halo Finder in yt

Over the last few weeks, Matt Turk, Christopher Moody, and Stephen Skory have been working to improve the integration of the Rockstar halo finder in yt. Rockstar was written primarily by Peter Behroozi and has a main website; linked there are the source code and the most current edition of the method paper, which includes a timing and scaling study.

Rockstar is a six-dimensional halo finder, meaning that it considers both particle position and velocity when locating dark matter halos. It is also capable of locating bound substructure within halos and of producing a detailed merger tree. As of this writing, its main limitation is that it cannot handle simulations with varying particle mass. This means that in simulations that include star particles, the star particles must be excluded for the purposes of halo finding. Likewise, Rockstar cannot analyze "zoom-in" or "nested" simulations, which contain dark matter particles of several different masses.
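As a minimal sketch of what excluding star particles looks like in practice (the field names and type codes below are hypothetical, not Rockstar's or yt's actual API), a NumPy mask can pull out the dark matter particles before they are handed to the halo finder:

```python
import numpy as np

# Hypothetical particle arrays: type code 1 marks dark matter, 2 marks
# stars.  These codes are illustrative only.
rng = np.random.default_rng(0)
positions = rng.random((1000, 3))
ptype = rng.integers(1, 3, size=1000)

# Keep only the dark matter particles for halo finding.
dm_mask = (ptype == 1)
dm_positions = positions[dm_mask]

print(dm_positions.shape[0], "dark matter particles retained")
```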

Here is a brief list of the main improvements:

The full documentation on how to run Rockstar is available in the yt documentation.

Examples of Substructure Location

One of the compelling features of Rockstar is the ability to identify bound substructure of halos. Below are two images showing the halos identified by HOP and Rockstar over-plotted on a projection of gas density. Note that the circles mean different things in the two cases. In the case of HOP, the circles show the radius from the center of mass to the most distant particle, while for Rockstar it is from the center of mass to the calculated virial radius.

Paying attention to the central region of the halo, notice how Rockstar identifies the small in-falling subhalos that HOP doesn't. This is not surprising because HOP is not designed to detect substructure.

HOP:

HOP Output

Rockstar:

Rockstar Output

Note that in the Rockstar image, the halos on the periphery are not encircled, due to the way the image was prepared.

Author: Stephen Skory
Published on: Nov 26, 2012, 8:22:16 PM
Permalink - Source code

What's Up With yt 3.0?

This is a long blog post! The short of it is:

As with all of yt, 3.0 is being developed completely in the open. We're testing using JIRA at http://yt-project.atlassian.net/ for tracking progress. The main development repository is at https://bitbucket.org/yt_analysis/yt-3.0 and discussions have been ongoing on the yt-dev mailing list. Down below you can find some contributor ideas and information!

Why 3.0?

The very first pieces of yt to be written are now a bit over six years old. When it started, it was a very simple Python wrapper around pyHDF, designed to make slices, then export those slices to ASCII where they were plotted by a plotting package called HippoDraw. It grew a bit to include projections and sphere selection over the course of a few months, and eventually became the community project it is today.

But, despite those initial steps being a relatively long time ago, there are still many vestiges in yt. For instance, the output of print_stats on an AMR hierarchy object is largely unchanged since that time.

Most importantly, however, yt needs to continue to adapt to best serve analysis and visualization needs in the community. To do that, yt 3.0 has been conceived as a project to rethink some of the basic assumptions and principles in yt. In doing so, we will be able to support new codes of different types and larger datasets, and, most importantly, grow the community of users and developers. In many ways, the developments in yt 3.0 will serve to clarify and simplify the code base, without sacrificing speed or memory efficiency. By changing the version number from 2.X to 3.0, we also send the signal that things may not work the same way -- in fact, there may be API incompatibilities along the way. But nothing will be changed without need, and we're trying to reduce disruption as much as possible.

yt 3.0 is designed to allow support for non-cartesian coordinates, non-grid data (SPH, unstructured mesh), and to remove many of the "Enzo-isms" that populate the code base. This brings with it a lot of work, but also a lot of opportunity.

If you have ideas, concerns or comments, email yt-dev!

What's Going In To 3.0?

We've slated a large number of items to be put into 3.0, as well as a large number of system rewrites. By approaching this piecemeal, we hope to address one feature or system at a time so that the code can remain in a usable state.

Geometry selection

In the 2.X series, all geometric selection (spheres, regions, disks) is conducted by looking first at grids, then points, and choosing which items go in. This also involves a large amount of numpy array concatenation, which isn't terribly good for memory.

The geometry selection routines have all been rewritten in Cython. Each geometric selection routine implements a selection method for grids and points. This allows non-grid based codes (such as particle-only codes) to use the same routines without a speed penalty. These routines all live inside yt/geometry/selection_routines.pyx, and adding a new routine is relatively straightforward.
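As a rough illustration of the idea (plain Python rather than Cython, with hypothetical method names -- not the actual contents of selection_routines.pyx), a selector object implements one coarse test for whole grids and one fine test for individual points:

```python
import numpy as np

class SphereSelector:
    """Sketch of a geometric selector: a coarse bounding-box test for
    grids, and a fine per-point test usable by particle-only codes."""
    def __init__(self, center, radius):
        self.center = np.asarray(center, dtype="float64")
        self.radius = radius

    def select_grid(self, left_edge, right_edge):
        # A grid can intersect the sphere only if the closest point of
        # its bounding box lies within the radius.
        closest = np.clip(self.center, left_edge, right_edge)
        return bool(((closest - self.center) ** 2).sum() <= self.radius ** 2)

    def select_points(self, x, y, z):
        # Boolean mask over individual points (works for particles too).
        r2 = ((x - self.center[0]) ** 2 + (y - self.center[1]) ** 2
              + (z - self.center[2]) ** 2)
        return r2 <= self.radius ** 2

sel = SphereSelector([0.5, 0.5, 0.5], 0.25)
print(sel.select_grid(np.zeros(3), np.ones(3)))  # True: box contains center
```

Because both grid selection and point selection go through the same object, a particle-only code simply skips the grid test and calls select_points directly, with no speed penalty from grid-centric assumptions.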

The other main change with how geometry is handled is that data objects no longer know how the data is laid out on disk or in memory. In the past, data objects all had a _grids attribute. But, in 3.0, this can no longer be relied upon -- because we don't want all the data formats to have grids! Data is now laid out in format-neutral "chunks," which are designed to support selection based on spatial locality, IO convenience, or another arbitrary method. This allows the new GeometryHandler class to define how data should be read in off disk, and it reduces the burden on the data objects to understand how to best access data.

For instance, the GridGeometryHandler understands how to batch grid IO for best performance and how to feed that to the code-specific IO handler to request fields. This new method allows data objects to specifically request particular fields, understand which fields are being generated, and most importantly not need to know anything about how data is being read off disk.

It also allows dependencies for derived fields to be calculated before any IO is read off disk. Presently, if the field VelocityMagnitude is requested of a data object, the data object will read the three fields x-velocity, y-velocity and z-velocity (or their frontend-specific aliases -- see below for discussion of "Enzo-isms") independently. The new system allows these to be read in bulk, which cuts by a third the number of trips to the disk, and potentially reduces the cost of generating the field considerably.
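In spirit (using hypothetical function names, not yt's real internals), the difference is between three independent trips to disk and one batched request for the resolved dependencies:

```python
# Sketch of batching derived-field dependencies before IO.  The names
# below are illustrative only.
DEPENDENCIES = {
    "VelocityMagnitude": ["x-velocity", "y-velocity", "z-velocity"],
}

def read_fields_batched(reader, fields):
    """Resolve all dependencies first, then hit the disk once."""
    deps = set()
    for f in fields:
        deps.update(DEPENDENCIES.get(f, [f]))
    return reader(sorted(deps))  # a single trip to disk

def fake_reader(fields):
    fake_reader.trips += 1       # count disk trips
    return {f: 1.0 for f in fields}
fake_reader.trips = 0

data = read_fields_batched(fake_reader, ["VelocityMagnitude"])
print(fake_reader.trips)  # 1 trip instead of 3
```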

Finally, it allows data objects to expose different chunking mechanisms, which simplifies parallelism and allows parallel analysis to respect a single, unified interface.
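Schematically (again with made-up names, not the actual GeometryHandler interface), a chunking mechanism just groups data sources by IO convenience or spatial locality, and both serial and parallel analysis iterate over the chunks through one interface:

```python
def iter_chunks(grids, chunk_size=2):
    """Yield format-neutral 'chunks' of data sources grouped for IO
    convenience (schematic; not yt's real chunking API)."""
    for i in range(0, len(grids), chunk_size):
        yield grids[i:i + chunk_size]

grids = ["g0", "g1", "g2", "g3", "g4"]
total = 0
for chunk in iter_chunks(grids):
    # Each chunk could be handed to one processor or one IO batch.
    total += len(chunk)
print(total)  # 5: every grid visited exactly once
```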

Geometry selection is probably the biggest change in 3.0, and the one that will enable yt to read particle codes in the same way it reads grid codes.

Removing Enzo-isms

yt was originally designed to read Enzo data. It wasn't until Jeff Oishi joined the project that we thought about expanding it beyond Enzo, to the code Orion, and at the time it was decided that we'd alias fields and parameters from Orion to the corresponding field names and parameters in Enzo. The Orion fields and parameters would still be available, but the canonical mechanism for referring to them from the perspective of derived fields would be the Enzo notation.

When we developed yt 2.0, we worked hard to remove many of the Enzo-isms from the parameter part of the system: instead of accessing items like pf["HubbleConstantNow"] (a clear Enzo-ism, with the problem that it's also not tab completable), we changed to explicitly accessing pf.hubble_constant.

But the fields were still Enzo-isms: Density, Temperature, etc. For 3.0, we decided this will change. The standard for fields used in yt is still under discussion, but we are moving towards following PEP-8 like standards, with lowercase and underscores, and going with explicit field names over implicit field names. Enzo fields will be translated to this (but of course still accessible in the old way) and all derived fields will use this naming scheme.
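A plausible translation table might look like the following (the final names were still under discussion when this was written, so these mappings are illustrative, not the adopted standard):

```python
# Illustrative mapping from Enzo-style field names to PEP-8-style names.
FIELD_ALIASES = {
    "Density": "density",
    "Temperature": "temperature",
    "x-velocity": "velocity_x",
}

def translate(field):
    """Return the new-style name, falling back to the original if no
    alias is registered (so old names remain accessible)."""
    return FIELD_ALIASES.get(field, field)

print(translate("Density"))  # density
```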

Non-Cartesian Coordinates

From its inception, yt has only supported cartesian coordinates explicitly. There are surprisingly few places where this becomes directly important: the volume traversal, a few fields that describe cell volumes, and the pixelizer routines.

Thanks to hard work by Anthony Scopatz and John ZuHone, we have now abstracted out most of these items. This work is still ongoing, but we have implemented a few of the basic items necessary to provide full support for cylindrical, polar and spherical coordinates. Below is a slice through a polar disk simulation, rendered with yt.

/attachments/cylindrical_pixelizer.png

Unit Handling and Parameter Access

Units in yt have always been in cgs, but we would like to make it easier to convert fields and lengths. The first step in this direction is to use Casey Stark's project dimensionful ( http://caseywstark.com/blog/2012/code-release-dimensionful/ ). This project is ambitious and uses the package SymPy ( http://sympy.org ) for manipulating symbols and units, and it seems ideal for our use case. Fields will now carry with them units, and we will ensure that they are correctly propagated.
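As a toy sketch of the idea (not dimensionful's actual API, which builds on SymPy), a value that carries its units can propagate them automatically through arithmetic:

```python
class UnitValue:
    """Toy value-with-units; real unit handling will use dimensionful
    and SymPy, with full dimensional analysis."""
    def __init__(self, value, units):
        self.value, self.units = value, units

    def __mul__(self, other):
        # Multiplying two quantities combines their unit symbols.
        return UnitValue(self.value * other.value,
                         "*".join(sorted([self.units, other.units])))

    def __repr__(self):
        return f"{self.value} {self.units}"

length = UnitValue(3.0, "cm")
mass = UnitValue(2.0, "g")
print(mass * length)  # 6.0 cm*g
```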

Related to this is how to access parameters. In the past, parameter files (pf) have been overloaded to provide dict-like access to parameters. This was degenerate with accessing units and conversion factors. In 3.0, you will need to explicitly access pf.parameters to access them.

Multi-Fluid and Multi-Particle Support

In yt 3.0, we want to be able to support simulations with separate populations of fluids and particles. As an example, in many cosmology simulations, both dark matter and stars are simulated. As it stands in yt 2.X, separating the two for analysis requires selecting the entire set of all particles and discarding those particles not part of the population of interest. Some simulation codes allow for subselecting particles in advance, but the means of addressing different particle types was never clear. For instance, it's not ideal to create new derived fields for each type of particle -- we want to re-use derived field definitions between particle types.

Some codes, such as Piernik (the code Kacper Kowalik, one of the yt developers, uses) also have support for multiple fluids. There's currently no clear way to address different types of fluid, and this suffers from the same issue the particles do.

In 3.0, fields are now specified by two characteristics, both of which have a default, which means you don't have to change anything if you don't have a multi-fluid or multi-particle simulation. But if you do, you can now access particles and fluids like this:

sp = pf.h.sphere("max", (10.0, 'kpc'))
total_star_mass = sp["Star", "ParticleMassMsun"].sum()

Furthermore, these field definitions can be accessed anywhere that allows a field definition:

sp = pf.h.sphere("max", (10.0, 'kpc'))
total_star_mass = sp.quantities["TotalQuantity"](("Star", "ParticleMassMsun"))

For codes that do allow easy subselection (like the sometime-in-the-future Enzo 3.0) this will also insert the selection of particle types directly in the IO frontend, preventing unnecessary reads or allocations of memory.

By using multiple fluids directly, we can define fields for angular momentum, mass and so on only once, but apply them to different fluids and particle types.

Supporting SPH and Octree Directly

One of the primary goals that this has all been designed around is supporting non-grid codes natively. This means reading Octree data directly, without the costly step of regridding it, as is done in 2.X. Octree data will be regarded as Octrees, rather than patches with cells in them. This can be seen in the RAMSES frontend and the yt/geometry/oct_container.pyx file, where the support for querying and manipulating Octrees can be found.

A similar approach is being taken with SPH data. However, as many of the core yt developers are not SPH simulators, we have enlisted people from the SPH community for help in this. We have implemented particle selection code (using Octrees for lookups) and are already able to perform limited quantitative analysis on those particles, but the next phase of using information about the spatial extent of particles is still to come. This is an exciting area, and one that requires careful thought and development.

How Far Along Is It?

Many of the items above are still in their infancy. However, several are already working. As it stands, RAMSES can be read and analyzed directly, but not volume rendered. The basics of reading SPH particles and quickly accessing them are done, but those particles cannot yet be regarded as a fluid with spatial extent or visualized in a spatial manner. Geometry selection is largely done, with the exception of boolean objects and covering grids. Units are still in their infancy, but the removal of Enzo-isms has begun. Finally, non-cartesian coordinates are somewhat, but not completely, functional; FLASH cylindrical datasets should be available, but they still require some work to analyze properly.

Why Would I Want To Use It?

The best part of many of these changes is that they're under the hood. But they also provide for cleaner scripts and a reduction in the effort to get started. And many of these improvements carry with them substantial speedups.

For example, reading a large data region off disk from an Enzo dataset is now nearly 50% faster than in 2.X, and the memory overhead is considerably lower (as we get rid of many intermediate allocations.) Using yt to analyze Octree data such as RAMSES and NMSU-ART is much more straightforward, and it requires no costly regridding step.

Perhaps the best reason to want to move to 3.0 is that it's going to be the primary line of development. Eventually 2.X will be retired, and hopefully the support of Octree and SPH code will help grow the community and bring new ideas and insight.

How Can I Help?

The first thing you can do is try it out! If you clone it from http://bitbucket.org/yt_analysis/yt-3.0 you can build it and test it. Many operations on patch based AMR will work (in fact, we run the testing suite on 3.0, and as of right now only covering grid tests fail) and you can also load up RAMSES data and project, slice, and analyze it.

If you run into any problems, please report them to either yt-users or yt-dev! And if you want to contribute, whether that be in the form of brainstorming, telling us your ideas about how to do things, or even contributing code and effort, please stop by either the #yt channel on chat.freenode.org or yt-dev, where we can start a conversation about how to proceed.

Thanks for making it all the way down -- 3.0 is the future of yt, and I hope to continue sharing new developments and status reports.

Author: Matthew Turk
Published on: Nov 15, 2012, 9:05:33 PM
Permalink - Source code

Our New Blog

Hi everyone! Welcome to the new yt Project blog. We've gotten rid of the old Posterous-based blog in favor of making it easier to include code and entries from anybody in the community, and, overall, to make contributing easier and clearer.

So, to that end, we've moved to using a combination of pretty cool technologies to make it easy to blog and have your entry added to the blog.

For the blogging itself, we use Blohg, which is a mercurial-backed system. So all the blog entries are stored in a mercurial repository, on BitBucket (yt_analysis/blog) and instead of being in HTML or something, they're written in ReStructured Text (ReST) -- which is the same format that the yt docstrings and documentation are all written in. We're standardizing on ReST, which means to contribute to any of yt, you only have to learn one way to format your text. (Plus, ReST is super easy.)

To add a new entry, you just have to fork the blog repository and then issue a Pull Request. You can add the entry by creating a new file in the directory content/post, and it'll automatically show up with your name and the time you added it. Once your pull request is accepted, the blog will be automatically rebuilt and uploaded to the blog site (thanks to Shining Panda, which we use for our testing suite -- more on that later!) which lives inside Amazon's cloud.

But all of this is hidden behind the scenes. For all intents and purposes, you just need to add your text and issue a pull request, and it'll show up in a few minutes.

But here's the best part -- by converting to this system, we've also made it easy to include code samples using the IPython Notebook. A bunch of the yt developers have started using the IPython notebook for basically everything -- analysis, teaching, sharing snippets -- and we want to keep using it for everything. (If you take a look over at https://hub.yt-project.org/ you can see that we've started uploading Notebooks to the yt Data Hub, which then get displayed by the amazing NBViewer project by the IPython developers.) So, we made it easy to include a notebook here in the blog.

To include the notebook, you'll first need a copy of the NBConvert repository, which will also need to be in your PYTHONPATH. You may also need to install the "pandoc" project, but that's usually included in most Linux distributions and can be installed with MacPorts. Once you've added that, just cd to the blohg directory and run::

python2.7 blohg_converter.py /path/to/your/notebook.ipynb

This will grab all the images and put them in the right directories inside the blog repository and add a new .rst file. Then just run hg ci -A and you're good to go!

Because this blog is a bit new, we're still working through some kinks. Already, as I've made a couple of changes, the RSS feed has marked itself as completely updated; this is an error, and I'm trying to figure out what's going on and fix it. So I apologize in advance if any other minor glitches happen along the way!

With this change in the blogging system, I think we've lowered the barrier to sharing changes in yt, new features, and even demonstrations of old features in the Notebook with the community. I'm really optimistic.

And if you have something you would like to share -- a new paper you've written, something cool you've done (even if not in yt!) or anything else, go ahead and fork the repository and write up a blog post -- everything you need comes in the box!

Author: Matthew Turk
Published on: Nov 4, 2012, 10:09:04 PM
Permalink - Source code

Simple Grid Refinement.ipynb

Notebook Download

Grid refinement

In yt, you can now generate very simple initial conditions:

In[1]:

from yt.mods import *
from yt.frontends.stream.api import load_uniform_grid
from yt.frontends.gdf.api import *
from yt.utilities.grid_data_format.writer import write_to_gdf

class DataModifier(object):
    pass

class TophatSphere(DataModifier):
    def __init__(self, fields, radius, center):
        self.fields = fields
        self.radius = radius
        self.center = center

    def apply(self, grid, container):
        r = ((grid['x'] - self.center[0])**2.0
         +   (grid['y'] - self.center[1])**2.0
         +   (grid['z'] - self.center[2])**2.0)**0.5
        for field in self.fields:
            grid[field][r < self.radius] = self.fields[field]

data = na.random.random((256, 256, 256))
ug = load_uniform_grid({'Density': data}, [256, 256, 256], 1.0)
yt : [INFO     ] 2012-10-30 18:11:48,715 Loading plugins from /home/mturk/.yt/my_plugins.py
yt : [INFO     ] 2012-10-30 18:11:49,025 Parameters: current_time              = 0.0
yt : [INFO     ] 2012-10-30 18:11:49,026 Parameters: domain_dimensions         = [256 256 256]
yt : [INFO     ] 2012-10-30 18:11:49,026 Parameters: domain_left_edge          = [ 0.  0.  0.]
yt : [INFO     ] 2012-10-30 18:11:49,027 Parameters: domain_right_edge         = [ 1.  1.  1.]
yt : [INFO     ] 2012-10-30 18:11:49,028 Parameters: cosmological_simulation   = 0.0
yt : [INFO     ] 2012-10-30 18:11:49,028 Parameters: current_time              = 0.0
yt : [INFO     ] 2012-10-30 18:11:49,028 Parameters: domain_dimensions         = [256 256 256]
yt : [INFO     ] 2012-10-30 18:11:49,029 Parameters: domain_left_edge          = [ 0.  0.  0.]
yt : [INFO     ] 2012-10-30 18:11:49,029 Parameters: domain_right_edge         = [ 1.  1.  1.]
yt : [INFO     ] 2012-10-30 18:11:49,030 Parameters: cosmological_simulation   = 0.0

In[2]:

spheres = []
spheres.append(TophatSphere({"Density": 2.0}, 0.1, [0.2,0.3,0.4]))
spheres.append(TophatSphere({"Density": 20.0}, 0.05, [0.7,0.4,0.75]))
for sp in spheres: sp.apply(ug.h.grids[0], ug)
yt : [INFO     ] 2012-10-30 18:11:49,035 Adding Density to list of fields

In[3]:

p = ProjectionPlot(ug, "x", "Density")
p.show()
Initializing tree  0 /  0  0% |                               | ETA:  --:--:--
Initializing tree  0 /  0100% ||||||||||||||||||||||||||||||||| Time: 00:00:00
Projecting  level  0 /  0   0% |                              | ETA:  --:--:--
Projecting  level  0 /  0 100% |||||||||||||||||||||||||||||||| Time: 00:00:01
yt : [INFO     ] 2012-10-30 18:11:53,889 Projection completed
yt : [INFO     ] 2012-10-30 18:11:53,894 xlim = 0.000000 1.000000
yt : [INFO     ] 2012-10-30 18:11:53,894 ylim = 0.000000 1.000000
yt : [INFO     ] 2012-10-30 18:11:53,895 Making a fixed resolution buffer of (Density) 800 by 800
yt : [INFO     ] 2012-10-30 18:11:53,914 xlim = 0.000000 1.000000
yt : [INFO     ] 2012-10-30 18:11:53,915 ylim = 0.000000 1.000000
yt : [INFO     ] 2012-10-30 18:11:53,915 Making a fixed resolution buffer of (Density) 800 by 800
yt : [INFO     ] 2012-10-30 18:11:53,934 Making a fixed resolution buffer of (Density) 800 by 800
/attachments/Simple_Grid_Refinement_files/Simple_Grid_Refinement_fig_00.png

We can even save them out to disk!

In[4]:

!rm /home/mturk/test.gdf

In[5]:

write_to_gdf(ug, "/home/mturk/test.gdf")

In[6]:

pf = GDFStaticOutput("/home/mturk/test.gdf")
yt : [INFO     ] 2012-10-30 18:11:56,370 Parameters: current_time              = 0.0
yt : [INFO     ] 2012-10-30 18:11:56,371 Parameters: domain_dimensions         = [256 256 256]
yt : [INFO     ] 2012-10-30 18:11:56,371 Parameters: domain_left_edge          = [ 0.  0.  0.]
yt : [INFO     ] 2012-10-30 18:11:56,372 Parameters: domain_right_edge         = [ 1.  1.  1.]
yt : [INFO     ] 2012-10-30 18:11:56,373 Parameters: cosmological_simulation   = 0.0

In[7]:

p2 = ProjectionPlot(pf, "x", "Density")
p2.show()
Initializing tree  0 /  0  0% |                               | ETA:  --:--:--
Initializing tree  0 /  0100% ||||||||||||||||||||||||||||||||| Time: 00:00:00
Projecting  level  0 /  0   0% |                              | ETA:  --:--:--
Projecting  level  0 /  0 100% |||||||||||||||||||||||||||||||| Time: 00:00:01
yt : [INFO     ] 2012-10-30 18:11:57,908 Projection completed
yt : [INFO     ] 2012-10-30 18:11:57,914 xlim = 0.000000 1.000000
yt : [INFO     ] 2012-10-30 18:11:57,914 ylim = 0.000000 1.000000
yt : [INFO     ] 2012-10-30 18:11:57,915 Making a fixed resolution buffer of (Density) 800 by 800
yt : [INFO     ] 2012-10-30 18:11:57,934 xlim = 0.000000 1.000000
yt : [INFO     ] 2012-10-30 18:11:57,935 ylim = 0.000000 1.000000
yt : [INFO     ] 2012-10-30 18:11:57,935 Making a fixed resolution buffer of (Density) 800 by 800
yt : [INFO     ] 2012-10-30 18:11:57,954 Making a fixed resolution buffer of (Density) 800 by 800
/attachments/Simple_Grid_Refinement_files/Simple_Grid_Refinement_fig_01.png

Over time, this functionality will expand to include cell-flagging, refinement, and much more interesting modifications to grid values.

Author: Matthew Turk
Published on: Oct 30, 2012, 10:16:54 PM
Permalink - Source code

yt 2.4 released!

We’re proud to announce the release of version 2.4 of the yt Project, http://yt-project.org/ . The new version includes many new features, refinements of existing features and numerous bugfixes. We encourage all users to upgrade to take advantage of the changes.

yt is a community-developed analysis and visualization toolkit, primarily directed at astrophysical hydrodynamics simulations. It provides full support for output from the Enzo, FLASH, Orion, and Nyx codes, with preliminary support for several others. It provides access to simulation data using an intuitive python interface, can perform many common visualization tasks and offers a framework for conducting data reductions and analysis of simulation data.

The most visible changes with the 2.4 release include:

For a complete list of changes in this release, please visit the Changelog ( http://yt-project.org/docs/2.4/changelog.html ). Information about the yt project, including installation instructions, can be found on the homepage: http://yt-project.org/

Author: Matthew Turk
Published on: Aug 3, 2012, 7:43:40 AM
Permalink - Source code

yt Google+ Hangout tomorrow!

Tomorrow we're going to try something new -- Google Hangouts! If you'd like help with something, want to share some feedback, or just want to say hi to other community members, stop by Tuesday, May 1st. We'll be starting up around 2PM Eastern and continuing for a couple hours.

If this works out, we'll try it again from time to time -- to catch up on new developments, help out with scripts or visualization issues, solicit feedback, and chat about using and developing yt.

You'll find the Hangout on the yt Google Plus page.

Author: Matthew Turk
Published on: Apr 30, 2012, 9:42:04 AM
Permalink - Source code

What's new with yt?

Now that the post-workshop preparations and work have settled down, I thought it might be interesting to share some of the developments going on with yt. We're still a long way from a new release, so these interim 'development' updates are meant to be a bit of a teaser. As always, these features are either in the main branch or (if noted) in a public fork on BitBucket. If they sound interesting, drop us a line on `yt-dev <http://lists.spacepope.org/listinfo.cgi/yt-dev-spacepope.org>`_ to see about getting involved!

Stephen has been pushing lately for more consistency in the code base -- indentation, naming conventions, spaces, and so on. Specifically, he has been suggesting we follow PEP-8, which is a standard for Python source code. This has gotten a lot of support, and so we're encouraging this in new commits and looking into mechanisms for updating old code. (Although it can cause some tricky merges, so we're trying to take it easy for a bit!)

JohnZ recently added a particle trajectory mechanism, for correlating particles between outputs and following them. This lets you see where they go and the character of the gas they pass through.

Sam has been looking at improving volume rendering, including adding hard surfaces and a much faster (Cythonized) kD-tree routine. The initial hard surface stuff looks just great. (This is all taking place in his fork.) This code is also threaded, so it should run much faster on multi-core machines.

JohnW identified a bug in the ghost zone generation, which has resulted in a big speedup for generating ghost zones!

Chris has been working to make the regridding process for ART substantially faster, and he's been having success with it. We're now trying to work together on changing how 'child masking' is thought of: with patch-based codes, it only masks those cells where data at finer levels is available. We're trying to make it also mark where coarser data is the finest available, which should help with speed for octree-based codes.

Finally, I've been working on geometric selection. My hope is that by rethinking how we handle geometry in yt and removing a number of intermediate steps, we can avoid creating a whole bunch of temporary arrays and speed up the process overall (and add better support for non-patch-based codes!). Results so far have been pretty good, but it's a long way from being ready. It's in my refactor fork.

There are a lot of exciting things going on, so keep your eyes on this space! In addition to all of these things, we've got web interactors for isolated routines, an all-new hub, improvements to reason, and tons of other stuff. As always, drop by yt-dev or the IRC channel if you'd like to get involved.

Author: Matthew Turk
Published on: Feb 13, 2012, 9:10:21 AM
Permalink - Source code

yt workshop 2012: a success!

The yt workshop last week in Chicago ( http://yt-project.org/workshop2012/ ) was an enormous success. On behalf of the organizing and technical committees, I'd like to specifically thank the FLASH Center -- particularly Don Lamb, Mila Kuntu, and Carrie Eder -- for their hospitality; the venue was outstanding and their welcome touching. Additionally, we're very grateful to the Adler Planetarium's Doug Roberts and Mark SubbaRao for hosting us on Wednesday evening -- seeing the planetarium show, as well as volume renderings made by yt users up on the dome, was so much fun. The yt workshop was supported by NSF Grant 1214147. Thanks to everyone who attended -- your energy and excitement helped make it a success.

Thanks also to the organizing and technical committees: Britton Smith, John ZuHone, Brian O'Shea, Jeff Oishi, Stephen Skory, Sam Skillman, and Cameron Hummels. All talks have been recorded, and you can clone a unified repository of talk slides and worked examples:

hg clone https://bitbucket.org/yt_analysis/workshop2012/

A few photos have been put up online, too: http://goo.gl/g02uP

As I am able to edit and upload talks, they'll appear on the yt youtube channel as well as on the yt homepage: http://www.youtube.com/ytanalysis

Thanks again, and wow, what a week!

Author: Matthew Turk
Published on: Jan 30, 2012, 3:02:09 PM
Permalink - Source code

Workshop in just a week!

The first yt workshop is in just about a week. We've updated the website with the current list of talks, along with information about getting to and from the workshop from the conference hotel, and information about how to get the sample data. Keep your eyes on the website in the lead up to the workshop, as we'll be posting a script for fisheye lens renderings for our viz night at the Adler, information about the talks and example scripts, and other useful info. Once the workshop is over we'll update with links to the full videos of the talks, the slides, and scripts.

Author: Matthew Turk
Published on: Jan 17, 2012, 12:45:43 AM
Permalink - Source code

yt Version 2.3 Announcement

Just in time for the New Year, we’re happy to announce the release of yt version 2.3! ( http://yt-project.org/ ) The new version includes many new modules and enhancements, and the usual set of bug fixes over the last point release. We encourage all users to upgrade to take advantage of the changes.

yt is a community-developed analysis and visualization toolkit for astrophysical simulation data. yt provides full support for Enzo, Orion, Nyx, and FLASH codes, with preliminary support for the RAMSES code (and a handful of others.) It can be used to create many common types of data products, as well as serving as a library for developing your own data reductions and processes.

Below is a non-comprehensive list of new features and enhancements:

Everything, from installation, to development, to a cookbook, can be found on the homepage: http://yt-project.org/

We have updated the libraries installed with the install script; for more information, see the “Dependencies” section of the yt docs at http://yt-project.org/doc/advanced/installing.html.

Development has been sponsored by the NSF, DOE, and various University funding. We invite you to get involved with developing and using yt!

We’re also holding the FIRST YT WORKSHOP from January 24-26 at the FLASH center in Chicago. See the workshop homepage for more information! http://yt-project.org/workshop2012/

Please forward this announcement to interested parties.

Sincerely,

The yt development team

Author: Stephen Skory
Published on: Dec 15, 2011, 4:44:00 PM
Permalink - Source code