Gaia DR2: A light version and light curves

Screenshot: TOPCAT and matplotlib
TOPCAT is doing datalink, and our little python script has plotted a two-colour time series of RMC 18 (or so I think).

If anyone ever writes a history of the VO, the second data release of Gaia on April 25, 2018 will probably mark its coming-of-age – at least if you, like me, consider the Registry the central element of the VO. It was spectacular to watch the spike of tens of Registry queries per second right around 12:00 CEST, the moment the various TAP services handing out the data made it public (with great aplomb, of course).

In GAVO’s Data Center we also carry Gaia DR2 data. Our host institute, the Zentrum für Astronomie in Heidelberg, also has a dedicated Gaia server. This relieves us from having to be a true mirror of the upstream data release. And since the source catalog has lots and lots of columns that most users will not be using most of the time, we figured a “light” version of the source catalog might fill an interesting ecological niche: Behold gaia.dr2light on the GAVO DC TAP service, containing essentially just the basic astrometric parameters and the diagonal of the covariance matrix.

That has two advantages: Result sets with SELECT * are a lot less unwieldy (but: just don’t do this with Gaia DR2), and, more importantly, a lighter table puts less load on the server. You see, conventional databases read entire rows when processing data, and having just 30% of the columns means I/O-bound tasks will run roughly three times faster (assuming the same hardware, of course). Hence, and contrary to several other DR2-carrying sites, you can run queries requiring full sequential scans of gaia.dr2light on our TAP service without timing out. If, on the other hand, you need to do debugging or full-covariance-matrix error calculations: The full DR2 gaia_source table is available in many places in the VO. Just use the Registry.
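
If you want to try such a scan yourself, run it async so no web timeout bites; here’s a minimal pyvo sketch (the service URL is the one from this post; the parallax histogram is just an arbitrary example of an I/O-bound aggregate):

import pyvo

tap = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
job = tap.submit_job("""
  SELECT ROUND(parallax) AS plx_bin, COUNT(*) AS n
  FROM gaia.dr2light
  GROUP BY plx_bin""")
job.run()
job.wait()   # this is the sequential scan; expect it to take a while
print(job.fetch_result().to_table())
job.delete()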

Photometry via TAP

A piece of Gaia DR2 that’s not available in this form anywhere else is the lightcurves; that’s per-transit photometry in the G, BP, and RP band for about 0.5 million objects that the reduction system classified as variable. ESAC publishes these through datalink from within their gaia_source table, and what you get back is a VOTable that has the photometry in the three bands interleaved.

I figured it might be useful to have that data in a TAP-queriable table, with the lightcurves stored right in the database. And that’s how gaia.dr2epochflux came into being. In there, you have three triples of arrays: the epochs (g_transit_time, bp_obs_time, and rp_obs_time), the fluxes (g_transit_flux, bp_flux, and rp_flux), and their errors (you can probably guess their names). So, to retrieve G lightcurves where available together with a gaia_source query of your liking, you could write something like

SELECT g.*, g_transit_time, g_transit_flux
FROM gaia.dr2light AS g
LEFT OUTER JOIN gaia.dr2epochflux
USING (source_id)
WHERE ...whatever...

– the LEFT OUTER JOIN arranges things such that the g_transit_time and g_transit_flux columns simply are NULL when there are no lightcurves; with a normal (“inner”) join, rows without lightcurves would not be returned in such a query.

To give you an idea of what you can do with this, suppose you would like to discover new variable blue supergiants in the Gaia data (who knows – you might discover the precursor of the next nearby supernova!). You could start with establishing color cuts and train your favourite machine learning device on light curves of variable blue supergiants. Here’s how to get (and, for simplicity, plot) time series of stars classified as blue supergiants by Simbad for which Gaia DR2 lightcurves are available, using pyvo and a little async trick:

from matplotlib import pyplot as plt
import pyvo

def main():
  simbad = pyvo.dal.TAPService(
    "http://simbad.u-strasbg.fr:80/simbad/sim-tap")
  gavodc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

  # Get blue supergiants from Simbad
  simjob = simbad.submit_job("""
    select main_id, ra, dec
    from basic
    where otype='BlueSG*'""")
  simjob.run()

  # Get lightcurves from Gaia
  try:
    simjob.wait()
    # The OFFSET 0 below is a classic postgres optimisation fence;
    # here it presumably keeps the planner from flattening the
    # crossmatch subquery -- treat it as boilerplate.
    time_series = gavodc.run_sync("""
      SELECT b.*, bp_obs_time, bp_flux, rp_obs_time, rp_flux
      FROM (
        SELECT main_id, source_id, g.ra, g.dec
        FROM gaia.dr2light AS g
        JOIN TAP_UPLOAD.t1 AS tc
        ON (0.002>DISTANCE(tc.ra, tc.dec, g.ra, g.dec))
        OFFSET 0) AS b
      JOIN gaia.dr2epochflux
      USING (source_id)
      """,
      uploads={"t1": simjob.result_uri})
  finally:
    simjob.delete()

  # Now plot one after the other
  for row in time_series.table:
    plt.plot(row["bp_obs_time"], row["bp_flux"])
    plt.plot(row["rp_obs_time"], row["rp_flux"])
    plt.show(block=False)
    # (on python 2, this was raw_input)
    input("{}; press return for next...".format(row["main_id"]))
    plt.cla()

if __name__=="__main__":
  main()

If you bother to read the code, you’ll notice that we transfer the Simbad result directly to the GAVO data center without first downloading it. That’s fairly boring in this case, where the table is small. But if you have a narrow pipe for one reason or another and some 10⁵ rows, passing around async result URLs is a useful trick.

In this particular case the whole thing returns just four stars, so perhaps that’s not a terribly useful target for your learning machine. But this piece of code should get you started to where there’s more data.

You should read the column descriptions and footnotes in the query results (or from the reference URL) – this tells you how to interpret the times and how to make magnitudes from the fluxes if you must. You probably can’t hear it any more, but just in case: If you can, process fluxes rather than magnitudes from Gaia, because the errors are painful to interpret in magnitudes when the fluxes are small (try it!).
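
If you must have magnitudes anyway, the conversion is just a zero point minus 2.5 log10(flux); here’s a sketch. Caveat: the G zero point below is the nominal DR2 Vega-system value as quoted in the release papers – verify it against the DR2 documentation before use, and note that BP and RP have their own zero points:

import numpy

G_ZP = 25.6884   # nominal DR2 G band (Vega) zero point -- an
                 # assumption; check the DR2 documentation

def flux_to_mag(flux, zp=G_ZP):
  # flux in e-/s, as in gaia.dr2epochflux
  return zp - 2.5*numpy.log10(flux)

def mag_error(flux, flux_error):
  # first-order error propagation; this is where the pain starts:
  # for small fluxes, this blows up (and becomes asymmetric)
  return 2.5/numpy.log(10)*flux_error/flux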

Note how the photometry data is stored in arrays in the database, and that VOTables can just transport these. The bad news is that support for manipulating arrays in ADQL is pretty much zero at this point; this means that, when you have trained your ML device, you’ll probably still have to download lots and lots of light curves rather than write some elegant ADQL to do the filtering server-side. However, I’d be highly interested to work out how some tastefully chosen user defined functions might enable offloading at least a good deal of that analysis to the database. So – if you know what you’d like to do, by all means let me know. Perhaps there’s something I can do for you.
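
Until then, the filtering has to happen on your side. A hypothetical sketch with numpy, continuing from the time_series result in the script above (both the 0.2 threshold and this notion of “amplitude” are made up for illustration):

import numpy

def relative_amplitude(flux):
  # crude variability measure: peak-to-peak over median flux
  flux = numpy.asarray(flux, dtype=float)
  flux = flux[numpy.isfinite(flux)]
  return (flux.max()-flux.min())/numpy.median(flux)

candidates = [row for row in time_series.table
  if relative_amplitude(row["bp_flux"])>0.2]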

Incidentally, I’ll talk a bit more about ADQL arrays in a blog post coming up in a few weeks (I think). Don’t miss it – subscribe to our feed.

Datalink

In the results from queries involving gaia.dr2epochflux, we also provide datalinks. These let you retrieve lightcurves that already have mags and that are more easily plotted. Perhaps more importantly, they link back to the full ESAC lightcurves that, in addition, give you a lot more debug information and are required if you want to reliably identify photometry points with the identifiers of the transits that generated them.
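
From a script, pyvo can chase these datalinks for you; here’s a quick sketch (iter_datalinks and the link attributes are current pyvo API as far as I can tell – check the pyvo documentation if it misbehaves):

import pyvo

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
res = svc.run_sync("""
  SELECT TOP 3 source_id, ra, dec
  FROM gaia.dr2epochflux
  JOIN gaia.dr2light USING (source_id)
  WHERE parallax>50""")

# iter_datalinks fetches and parses the datalink document
# for each row that declares one
for links in res.iter_datalinks():
  for link in links:
    print(link.semantics, link.access_url)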

Datalink support in clients still is not great, but it’s growing nicely. Your ideas for workflows that should be supported are (again) most welcome – and have a good chance of being adopted. So, try things out, for instance by getting the most recent TOPCAT (as of this writing) and do the following:

  1. Open the VO/TAP dialog from the menu bar and double click the GAVO DC TAP service.
  2. Enter
    SELECT source_id, ra, dec,
    phot_bp_mean_mag, phot_rp_mean_mag, phot_g_mean_mag,
    g_transit_time, g_transit_flux,
    rp_obs_time, rp_flux
    FROM gaia.dr2epochflux 
    JOIN gaia.dr2light
    USING (source_id)
    WHERE parallax>50
    

    into the “ADQL Text” field to retrieve lightcurves for the more nearby variables (in reality, you’d have to be a bit more careful with the distances, but you already knew that).

  3. plot something like phot_bp_mean_mag-phot_rp_mean_mag vs. phot_g_mean_mag (and adapt the plot to fit your viewing habits).
  4. Open the dialog for Views/Activation Actions (from the menu bar or the tool bar – same thing), check “Invoke Service”, choose “View Datalink Table”.
  5. Whenever you click on a point in your CMD, a window will pop up in which you can choose between the time series in the various bands, and you can pull in the data from ESAC; to load a table, select “Load Table” from the actions near the foot of the datalink table and click “Invoke”.

Yeah. It’s clunky. Help us make it better with your fresh ideas for interfaces (and don’t be cross with us if we have to marry them with what’s technically feasible and readily generalised).

SSAP and Obscore

If you’re fed up with bleeding-edge tech, the light curves are also available through good old SSAP and Obscore. To use that, just get SPLAT (or another SSA client, preferably with a bit of time series support). Look for a Gaia DR2 time series service (you may have to update the service list before you find it), enter (in keeping with our LBV theme) S Dor as the position and hit “Lookup” followed by “Send Query”. Just click on any result to view the time series – and then apply SPLAT’s rich tool set to it.
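
If you’d rather script that: pyvo speaks SSAP, too. A sketch – the registry keywords here are guesses, and S Dor sits at roughly α=79.56°, δ=−69.25° (resolve it properly yourself):

import pyvo

# discover the service in the Registry instead of hard-coding a URL
services = pyvo.regsearch(keywords=["gaia dr2 time series"],
  servicetype="ssa")
svc = services[0].service

for rec in svc.search(pos=(79.56, -69.25), diameter=0.05):
  print(rec.getdataurl())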

Update (8.5.2018): Clusters

Here’s another quick application – how about looking for variable stars in clusters? This piece of ADQL should get you started:

SELECT TOP 100 
  source_id, ra, dec, parallax, g.pmra, g.pmdec,
  m.name, m.pmra AS c_pmra, m.pmde AS c_pmde, 
  m.e_pm AS c_e_pm,
  1/dist AS cluster_parallax
FROM 
  gaia.dr2epochflux
  JOIN gaia.dr2light AS g USING (source_id)
  JOIN mwsc.main AS m
  ON (1=CONTAINS(
    POINT(g.ra, g.dec),
    CIRCLE(m.raj2000, m.dej2000, rcluster)))
WHERE IN_UNIT(pmdec, 'deg/yr') BETWEEN m.pmde-m.e_pm*3 AND m.pmde+m.e_pm*3

– yes, you’ll want to constrain pmra, too, and the distance, and properly deal with the errors and all that. But you get simple lightcurves for free. Just add them in the SELECT clause!

Space and Time not lost on the Registry

Histogram: observation dates of an image service

A histogram of times for which the Palomar-Leiden service has images: That’s temporal service coverage right there.

If you are an astronomer and you’ve ever tried looking for data in the Virtual Observatory Registry, chances are you have wondered “Why can’t I enter my position here?” Or perhaps “So, I’m looking for images in [NIII] – where would I go?”

Both of these are examples of the use of Space-Time Coordinates (STC) in data discovery – yes, spectral coordinates count as STC, too, and I could make an argument for it. But this post is about something else: None of this has worked in the Registry up to now.

It’s time to mend this blatant omission. After a bit of discussion on some of the IVOA’s mailing lists, last Thursday I posted an IVOA note proposing exactly these next steps. It is, perhaps with a bit of over-confidence, called A Roadmap for Space-Time Discovery in the VO Registry. And I’d much appreciate feedback, in particular if you are a VO user and have ideas on what you’d like to do with such a facility.

In this post, I’d like to give a very quick run-down on what is in it for (1) VO users, (2) service operators in general, and (3) service operators who happen to run DaCHS.

First, users. We already are pretty good on spatial coverage (for about 13000 of almost 20000 resources), so it might be worth experimenting with that. For now, the corresponding table is only available on the RegTAP mirror at http://dc.g-vo.org/tap. There, you can try queries like

select ivoid from
rr.table_column
natural join rr.stc_spatial
where
  1=contains(gavo_simbadpoint('HDF'), coverage)
  and ucd like 'phot.flux;em.radio%'

to find – in this case – services that have radio fluxes in the area of the Hubble Deep Field. If these lines scare you or you don’t know what to do with the stupid ivoids, check the previous post on this blog – it explains a bit more about RegTAP and why you might care.
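
To turn the ivoids into something a TAP client can use, join in rr.interface, which carries the access URLs; for instance (same constraints as above, with the standard_id pattern restricting matches to TAP services – a sketch, so adapt the details):

import pyvo

regtap = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
res = regtap.run_sync("""
  SELECT DISTINCT ivoid, access_url
  FROM rr.capability
    NATURAL JOIN rr.interface
    NATURAL JOIN rr.stc_spatial
    NATURAL JOIN rr.table_column
  WHERE 1=CONTAINS(gavo_simbadpoint('HDF'), coverage)
    AND ucd LIKE 'phot.flux;em.radio%'
    AND standard_id LIKE 'ivo://ivoa.net/std/tap%'
    AND intf_role='std'""")
for row in res:
  print(row["ivoid"], row["access_url"])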

Similarly cool things will, hopefully, some day be possible in spectrum and time. For instance, if you were interested in SII fluxes in the crab nebula in the early sixties, you could, some day, write

SELECT ivoid FROM
rr.stc_temporal
NATURAL JOIN rr.stc_spectral
NATURAL JOIN rr.stc_spatial
WHERE
  1=CONTAINS(gavo_simbadpoint('M1'), coverage)
  AND 1=ivo_interval_overlaps(
    6.69e-7, 6.75e-7, 
    wavelength_start, wavelength_end)
  AND 1=ivo_interval_overlaps(
    36900, 38800,
    time_start, time_end)

As you can see, the spectral coordinate will, following (admittedly broken) VO convention, be given in meters of vacuum wavelength, and time in MJD. In particular the thing with the wavelength isn’t quite settled yet – personally, I’d much rather have energy there. For one, it’s independent of the embedding medium, but much more excitingly, it even remains somewhat sensible when you go to non-electromagnetic messengers.

A pattern I’m trying to establish is the use of the user-defined function ivo_interval_overlaps, also defined in the Note. This is intended to allow robust query patterns in the presence of two intrinsically interval-valued things: The service’s coverage and the part of the spectrum you’re interested in, say. With the proposed pattern, either of these can degenerate to a single point and things still work. Things only break when both the service and you figure that “Aw, Hα is just 656.3 nm” and one of you omits a digit or adds one.
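
In Python, for illustration, the semantics boil down to this (a sketch, not the Note’s normative definition):

def ivo_interval_overlaps(lo1, hi1, lo2, hi2):
  # two closed intervals overlap iff each starts before the
  # other ends; this keeps working when either interval
  # degenerates to a point (lo==hi)
  return 1 if lo1<=hi2 and lo2<=hi1 else 0

# a point probe against the [SII] interval from the query above:
assert ivo_interval_overlaps(6.7e-7, 6.7e-7, 6.69e-7, 6.75e-7)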

But that’s academic at this point, because really few resources define their coverage in time and spectrum. Try it yourself:

SELECT COUNT(*) FROM (
  SELECT DISTINCT ivoid FROM rr.stc_temporal) AS q

(the subquery with the DISTINCT is necessary because a single resource can have multiple rows for time and spectrum when there’s multiple distinct intervals – think observation campaigns). If this gives you more than a few dozen rows when you read this, I strongly suspect it’s no longer 2018.

To improve this situation, the service operators need to provide the information on the coverage in their resource records. Indeed, the registry schemas already have the notion of a coverage, and the Note, in its core, simply proposes to add three elements to the coverage element of VODataService 1.1. Two of these new elements – the coverage in time and spectrum – are simple floating-point intervals and can be repeated in order to allow non-contiguous coverage. The third element, the spatial coverage, uses a nifty data structure called a MOC, which expands to “HEALPix Multi-Order Coverage map” and is the main reason why I claim we can now pull off STC in the Registry: MOCs let databases and other programs easily and quickly manipulate areas on the sphere. Without MOCs, that’s a pain.
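
To see why MOCs make this cheap: in the NESTED HEALPix numbering they build on, every cell at order n has exactly four children at order n+1, so asking whether one cell lies within another is pure integer arithmetic. A toy illustration in Python:

def cell_contains(order_a, pix_a, order_b, pix_b):
  # does cell pix_a at order_a contain cell pix_b at order_b?
  # the order_a ancestor of pix_b is pix_b with the trailing
  # 2*(order_b-order_a) bits shifted away
  if order_b<order_a:
    return False
  return pix_b >> 2*(order_b-order_a) == pix_a

# cell 3/336 from the example below has the order-4 children
# 1344 through 1347:
assert cell_contains(3, 336, 4, 1345)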

So, if you have registry records somewhere, please add the elements as soon as you can – if you don’t know how to make a MOC: CDS’ Aladin is there to help. In the end, your coverage elements should look somewhat like this:

<coverage>
  <spatial>3/336,338,450-451,651-652,659,662-663 
    4/1816,1818-1819,1822-1823,1829,1840-1841</spatial>
  <temporal>37190 37250</temporal>
  <temporal>54776 54802</temporal>
  <spectral>3.3e-07 6.6e-07</spectral>
  <spectral>3.5e-06 2.0e-05</spectral>
  <waveband>Optical</waveband>
  <waveband>Infrared</waveband>
</coverage>

The waveband elements are remainders from VODataService 1.1. They are still in use (prominently, for one, in SPLAT), and it’s certainly still a good idea to keep giving them for the foreseeable future. You can also see how you would represent multiple observing campaigns and different spectral ranges.

Finally, if you’re running DaCHS and you’re using it to generate registry records (and there’s almost no excuse for not doing so), you can simply write a coverage element into your RD starting with DaCHS 1.2 (or, if you run betas, 1.1.1, which is already available). You’ll find lots of examples at the usual place. A relatively interesting example is the resource descriptor of plts, which contains this:


<coverage>
  <updater spaceTable="data" spectralTable="data" mocOrder="4"/>
  <spectral>3.3e-07 6.6e-07</spectral>
  <temporal>37190 37250</temporal>
  <temporal>38776 38802</temporal>
  <temporal>41022 41107</temporal>
  <temporal>41387 41409</temporal>
  <temporal>41936 41979</temporal>
  <temporal>43416 43454</temporal>
  <spatial>3/282,410 4/40,323,326,329,332,387,390,396,648-650,1083,1085,1087,1101-1103,1123,1125,1132-1134,1136,1138-1139,1144,1146-1147,1173-1175,1216-1217,1220,1223,1229,1231,1235-1236,1238,1240,1597,1599,1614,1634,1636,1728,1730,1737,1739-1740,1765-1766,1784,1786,2803,2807,2809,2812</spatial>
</coverage>

This particular service archives plate scans from the Palomar-Leiden Trojan surveys; these were looking for Trojan asteroids (of Jupiter) using the Palomar 122 cm Schmidt and were conducted in several shortish campaigns between 1960 and 1977 (incidentally, if you’re looking for things near the Ecliptic, this stuff might still hold valuable insights for you). Because the fill factor for the whole time period is rather small, I manually extracted the time coverage; for that, I ran select dateobs from plts.data via TAP and made the histogram plot above. Zooming in a bit, I read off the limits in TOPCAT’s coordinate display.

The other coverages, however, were put in automatically by DaCHS. That’s what the updater element does: for each axis, you can say where DaCHS should look, and it will then fill in the appropriate data from what it guesses gives the relevant coordinates – that’s straightforward for standard tables like the ones behind SSAP and SIAP services (or obscore tables, for that matter), perhaps a bit more involved otherwise. To say “just do it for all axes”, give the updater a single sourceTable attribute.

Finally, in this case I’m overriding mocOrder, the order down to which DaCHS tries to resolve spatial features. I’m doing this here because in determining the coverage of image services DaCHS right now only considers the centers of the images, and that’s severely underestimating the coverage here, where the data products are the beautiful large Schmidt plates. Hence, I’m lowering the resolution from the default 6 (about one degree linearly) to still give some approximation to the actual data coverage. We’ll fix the underlying deficit as soon as pgsphere, the postgres extension which is actually dealing with all the MOCs, has support for turning circles and polygons into MOCs.

When you have defined an updater, just run dachs limits q.rd, and DaCHS will carefully (preserving your indentation) re-write the RD to contain what it has worked out from your table (but careful: it will overwrite what was previously there; so, make sure you only ask DaCHS to deal with axes you’re not maintaining manually).

If you feel like writing code discovering holes in the intervals, ideally already in the database: that would be great, because the tighter the intervals defined, the fewer false positives people will have in data discovery.
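
To make that invitation a bit more concrete, here is the sort of logic I have in mind, sketched client-side with numpy (the 30-day threshold is arbitrary; an in-database version would presumably use window functions):

import numpy

def split_at_gaps(times, min_gap=30):
  # turn a bag of epochs (MJD) into a list of (start, end)
  # intervals, opening a new interval wherever consecutive
  # epochs are more than min_gap days apart
  times = numpy.sort(numpy.asarray(times, dtype=float))
  breaks = numpy.where(numpy.diff(times)>min_gap)[0]
  starts = numpy.concatenate(([0], breaks+1))
  ends = numpy.concatenate((breaks, [len(times)-1]))
  return [(times[i], times[j]) for i, j in zip(starts, ends)]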

The take-away for DaCHS operators is:

  1. Add STC coverage to your resources as soon as you’ve updated to DaCHS 1.2
  2. Unless you need the tightest coverage declaration conceivable, all you have to do is add
      <coverage>
        <updater sourceTable="my_table"/>
      </coverage>

    to your RD (where my_table is the id of your service’s “main” table) and then run dachs limits q.rd

  3. For special effects and further information, see Coverage Metadata in the DaCHS reference documentation
  4. If you have a nice postgres function that splits a simple coverage interval up so the filling factor of a set of new intervals increases (or know a nice, database-compatible algorithm to do so) – please let me know.

DaCHS 1.1 released

Today, I have released DaCHS 1.1, with the main selling point that DaCHS should now speak TAP 1.1 (as defined in the current draft).

First off, if you’re not yet on DaCHS 1.0, please read the corresponding release article before upgrading.

As usual, the general upgrading instructions are available in the operator’s guide (in short: do a dachs val ALL before the Debian upgrade). This time, I’d recommend to use the opportunity to upgrade your underlying server to stretch if you haven’t done so already. If you do that, please have a look at hints on postgres upgrades. Stretch comes with postgres 9.6 (jessie: 9.4). Postgres upgrades are generally safe, but please take a dump before migrating anyway.

So, with this out of the way, here’s a short list of the major changes from DaCHS 1.0 to DaCHS 1.1:

    • DaCHS now officially requires python 2.7. If this really is a problem for you, please shout – it wouldn’t be hard to maintain 2.6 compatibility, but by now we feel there’s no reason to bother any more.
    • Now supporting TAP 1.1; in particular, TOP n doesn’t trump MAXREC any more, and it doesn’t affect OVERFLOW indication, which may break things that used TOP to override DaCHS’ default TAP match limit of 2000. Also, TAP_SCHEMA is updated (this happens as a side effect of dachs upgrade).
    • Now serialising spoint, scircle, and friends to DALI 1.1 xtypes (timestamp, point, polygon, circle). Fields explicitly marked with adql:POINT or adql:REGION will still be serialised to STC-S. Do this only if you have no choice (DaCHS has this for obscore and epntap s_region right now).
    • The output column selection is sanitised. This may make for slight changes in service responses, in particular in VOTable formats. See Output Tables in the reference documentation for details if you think this might hit you.
    • DaCHS no longer comes with an outdated version of pyparsing and instead uses what’s installed on the system. The Debian package further re-uses additional system resources if available (rjsmin, jquery).
    • DaCHS now tries a bit harder to come up with sensible names for SODA result files.
    • map/@source is no longer limited to identifier-like strings; any key that’s in your source is fair game.
    • For incremental imports with data that’s updated now and then, there’s now ignoreSources/@fromdbUpdating.
    • Relative imports from custom code (“import foo” in a custom core, for instance, getting res/foo.py) no longer work. See Importing Modules in the reference documentation for details.
    • This release fixes a severe bug in the creation of obscore metadata from SSAP tables. If you use //obscore#publishSSAPHCD or //obscore#publishSSAPMIXC mixins, update the obscore definitions by running dachs imp -m <rdid>, followed by dachs imp //obscore (the latter is only necessary once at the end).
    • You can now define a footer.html template that’s added at the foot of the main page content – with a bit of CSS magic, this lets you overwrite almost anything on DaCHS HTML pages.

    As always, please complain early if something breaks for you; our regression tests can only cover so much. In particular, our support list is there for you.

    Update (2017-12-06): In particular on jessie, you may see that all DaCHS packages are being held back. To resolve this situation, manually say apt-get install python-gavoutils python-gavostc.

See Who’s Kinking the Sky

A new arrival in the GAVO Data Center is UCAC5, another example of a slew of new catalogs combining pre-existing astrometry with Gaia DR1, just like the HSOY catalog we’ve featured here a couple of weeks back.

That’s a nice opportunity to show how to use ADQL’s JOIN operator for something other than the well-known CONTAINS-type crossmatch. Since both UCAC5 and HSOY reference Gaia DR1, each has, for every object, a notion of which element of the Gaia source catalog it corresponds to. For HSOY, that’s the gaia_id column; in UCAC5, it’s just source_id. Hence, to compare results from both efforts, all you have to do is join on source_id=gaia_id (you can save yourself the explicit table references here because the column names are unique to each table).

So, if you want to compare proper motions, all you need to do is to point your favourite TAP client’s interface to http://dc.g-vo.org/tap and run

SELECT 
    in_unit(avg(uc.pmra-hsoy.pmra), 'mas/yr') AS pmradiff, 
    in_unit(avg(uc.pmde-hsoy.pmde), 'mas/yr') AS pmdediff, 
    count(*) as n, 
    ivo_healpix_index (6, raj2000, dej2000) AS hpx 
    FROM hsoy.main AS hsoy 
    JOIN ucac5.main as uc 
    ON (uc.source_id=hsoy.gaia_id) 
    WHERE comp IS NULL    -- hsoy junk filter
    AND clone IS NULL     -- again, hsoy junk filter
    GROUP BY hpx

(see Taylor et al’s All of the Sky if you’re unsure what to make of the healpix/GROUP BY magic).

Of course, the fact that both tables are in the same service helps, but with a bit of upload magic you could do about the same analysis across TAP services.
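
That upload magic might look about like this in pyvo – hypothetical in that I’m pretending HSOY and UCAC5 sat on two different services, so the second URL is a mere placeholder:

import pyvo

hsoy_svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
ucac5_svc = pyvo.dal.TAPService("http://some.other.site/tap")

# pull a manageable piece of HSOY...
hsoy_part = hsoy_svc.run_sync("""
  SELECT gaia_id, pmra, pmde FROM hsoy.main
  WHERE comp IS NULL AND clone IS NULL
  AND dej2000>85""")

# ...and join it remotely against UCAC5 on the other service
matched = ucac5_svc.run_sync("""
  SELECT uc.source_id, uc.pmra-h.pmra AS pmradiff
  FROM ucac5.main AS uc
  JOIN TAP_UPLOAD.h AS h ON (uc.source_id=h.gaia_id)""",
  uploads={"h": hsoy_part.to_table()})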

Just so there’s a colourful image in this post, too, here’s what this query shows for the differences in proper motion in RA:

(equatorial coordinates, and the aux axis is a bit cropped here; try for yourself to see how things look for PM in declination or when plotted in galactic coordinates).

What does this image mean? Well, it means that probably both UCAC5 and HSOY would still put kinks into the sky if you wait long enough.

In the brightest and darkest points, if you waited 250 years, the coordinate system induced by each catalog on the sky would be off by 1 arcsec with respect to the other – that’s a difference of just 4 mas/yr accumulated over 250 years (and on a sphere, an offset like that means there are kinks somewhere). It may seem amazing that there’s agreement to at least this level between the two catalogs – mind you, 1 arcsec is still more than 100 times smaller than what you could see by eye; you’d have to go back to the Mesolithic age to have the slightest chance of spotting the disagreement without serious optical aids. But when Gaia DR2 comes around (hopefully in April 2018), our sky will be more stable even than that.

Of course, both UCAC5 and HSOY are, indirectly, standing on the shoulders of the same giant, namely Hipparcos and Tycho, so the agreement may be less surprising, and we strongly suspect that a similar image will look a whole lot less pleasant when Gaia has straightened out the sky, in particular towards weaker stars.

But still: do you want to bet if UCAC5 or HSOY will turn out to be closer to a non-kinking sky? Let us know. Qualifications („For bright stars…”) are allowed.

Automating TAP queries

TOPCAT is a great tool – in particular, for prototyping and ad-hoc analyses, it’s hard to beat. But say you’ve come up with this great TAP query, and you want to preserve it, perhaps generate plots, then maybe fix them when you (or referees) have spotted problems.

Then it’s time to get acquainted with TOPCAT’s command line sister STILTS. This tool lets you do most of what TOPCAT can, but without user intervention. Since the command lines usually take a bit of twiddling, I wrap my stilts calls with GNU make, so I just need to run something like make fig3.pdf or make fig3.png, and make figures out what to do, potentially starting with a query. Call it workflow preservation if you want.

How does it work? Well, of course with a makefile. In that, you’ll first want to define some variables that allow more concise rules later. Here’s how I usually start one:

STILTS?=stilts

# VOTables are the results of remote queries, so don't wantonly throw
# them away
.PRECIOUS: %.vot

# in this particular application, it helps to have this global
HEALPIX_ORDER=6

# A macro that contains common stuff for stilts TAP query -- essentially,
# just add adql=
TAPQUERY=$(STILTS) tapquery \
  tapurl='http://dc.g-vo.org/tap' \
  maxrec=200000000 \
  omode=out \
  out=$@ \
  ofmt=vot \
  executionduration=14400 

# A sample plot macro.  Here, we do a healpix plot of some order. Also
# add value_1=<column to plot>
HEALPIXPLOT=$(STILTS) plot2sky \
  auxmap=inferno \
  auxlabel='$*'\
  auxvisible=true \
  legend=false \
  omode=out \
  out=$@ \
  projection=aitoff \
  sex=false \
  layer_1=healpix \
    datalevel_1=$(HEALPIX_ORDER) \
    datasys_1=equatorial \
    viewsys_1=equatorial \
    degrade_1=0 \
    combiner_1=sum \
    transparency_1=0 \
    healpix_1=hpx \
    in_1=$< \
    ifmt_1=votable \
    istream_1=true

For the somewhat scary STILTS command lines, see the STILTS documentation or just use your intuition (which mostly should give you the right idea of what something is for).

The next step is to define some pattern rules; these are a (in the syntax here, GNU) make feature that lets you say „to make a file matching the destination pattern when you have one with the source pattern, use the following commands”. You can use a number of variables in the rules, in particular $@ (the current target) and $< (the first prerequisite). These are already used in the definitions of TAPQUERY and HEALPIXPLOT above, so they don’t necessarily turn up here:

# healpix plots from VOTables; these will plot obs
%.png: %.vot
	$(HEALPIXPLOT) \
  	value_1=obs \
  	ofmt=png \
  	xpix=600 ypix=380 

%.pdf: %.vot
	$(HEALPIXPLOT) \
  	value_1=obs \
  	ofmt=pdf

# turn SQL statements into VOTables using TAP
%.vot: %.sql
	$(TAPQUERY) \
  	adql="`cat $<`"

Careful with cut and paste: The leading whitespace here must be a Tab in rules, not just some blanks (this is probably the single most annoying feature of make; you’ll get used to it).

What can you do with it? Well, for instance you can write an ADQL query into a file density.sql; say:

SELECT
  count(*) as obs,
  -- "6" in the next line must match HEALPIX_ORDER in the Makefile
  ivo_healpix_index (6, alphaFloat, deltaFloat) AS hpx
FROM ppmx.data
GROUP BY hpx

And with this, you can say

make density.pdf

and get a nice PDF with the plot resulting from that query. Had you just said make density.vot, make would just have executed the query and left the VOTable, e.g., for investigation with TOPCAT, and if you were to type make density.png, you’d get a nice PNG without querying the service again. Like this:

Image: density.png – an all-sky HEALPix map of PPMX source counts, as produced by the query above.

Unless of course you changed the SQL in the meantime, in which case make would figure out it had to go back to the service.

In particular for the plots, you’ll often have to override the pattern-rule defaults with explicit rules; make is smart enough to figure out which rule applies. For instance, you could have two files like this:

$ cat pm_histogram.sql
SELECT
  round(pmtot/10)*10 as bin, count(*) as n 
FROM ( 
  SELECT sqrt(pmra*pmra+pmde*pmde)*3.6e6 as pmtot
  FROM hsoy.main) AS q
group by bin
$ cat pm_histogram_cleaned.sql
SELECT 
  round(pmtot/10)*10 as bin, 
  count(*) as n 
  FROM ( 
    SELECT sqrt(pmra*pmra+pmde*pmde)*3.6e6 as pmtot 
    FROM hsoy.main
    WHERE no_sc IS NULL) AS q 
  group by bin

(these were used to analyse the overall proper motion distributions in HSOY; note that each of these will run for about 30 minutes or so, so better adapt them to what’s actually interesting to you before trying this).

No special handling in terms of queries is necessary for these, but the plot needs to be hand-crafted:

pm_histograms.png: pm_histogram.vot pm_histogram_cleaned.vot
	$(STILTS) plot2plane legend=false \
	  omode=out ofmt=png out=$@ \
  	title="All-sky" \
  	xpix=800 ypix=600 \
  	ylog=True xlog=True\
  	xlabel="PM bin [mas/yr, bin size=10]" \
  	xmax=4000 \
	layer1=mark \
  	color1=blue \
  	in1=pm_histogram.vot \
  	x1=bin \
  	y1=n \
	layer2=mark \
  	in2=pm_histogram_cleaned.vot \
  	x2=bin \
  	y2=n

– that way, even if you go back to the stuff six months later, you can still figure out what you queried (the files are still there) and what you did then.

A makefile to play with (and safe from cut-and-paste problems) is available from Makefile_tapsample (rename to Makefile to reproduce the examples).

PPMXL+Gaia DR1=HSOY in the Heidelberg Data Center

The stunning precision of Gaia’s astrometry is already apparent in the first release of the data obtained by the satellite, available since last September. However, apart from the small TGAS subset (objects already observed by the 90ies HIPPARCOS astrometry satellite) there is no information on the objects’ proper motions in DR1.

Until Gaia-quality proper motions become available in DR2, the HSOY catalog – described in Altmann et al’s paper Hot Stuff for One Year (HSOY), freshly up on arXiv and online at http://dc.g-vo.org/hsoy – can help if you can live with somewhat lesser-quality kinematics.

It derives proper motions for roughly half a billion stars from PPMXL and Gaia DR1, which already gives an unprecedented source for 4D astrometry around J2015. And you can start working with it right now. The catalog is available in GAVO’s Heidelberg data center (TAP access URL: http://dc.g-vo.org/tap; there’s also an SCS service). Fire up your favourite TAP or SCS client (our preference: TOPCAT) and search for HSOY.
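
And if you’d rather script than click, here’s a quick pyvo appetiser; the half-degree cone sits on the Pleiades, the column names are hsoy.main’s, and the four-argument DISTANCE is a shortcut this particular service understands:

import pyvo

tap = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
res = tap.run_sync("""
  SELECT TOP 10 raj2000, dej2000, pmra, pmde
  FROM hsoy.main
  WHERE DISTANCE(raj2000, dej2000, 56.75, 24.12)<0.5""")
print(res.to_table())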

Image: Errors in proper motion in declination in HSOY on the sky

HSOY average errors in proper motion in declination over the sky, in mas/yr. The higher errors south of -30 degrees are because the great sky surveys of the 50ies could not be extended to the southern sky, and thus the first epoch there typically is in the 1980ies.

Oh, and in case you’re new to the whole TAP/ADQL game: There’s our ADQL introduction, and if you’re at a German astronomical institution, we’d be happy to hold one of our VO Days at your institute – just drop us a mail.