HTTPS in DaCHS

SAMP button vanishing
Just a small part of the changes in preparation for HTTPS support: If DaCHS realises it’s queried via HTTPS, it removes the SAMP button (that doesn’t work then).

(Warning: No astronomy-relevant content at all this time).

I can’t say I’m a big fan of the mighty push towards HTTPS that’s going on right now – as I argue in the updated operator’s guide, it doesn’t do people’s privacy a lot of good (compared to, say, pushing for browsers to not execute Javascript by default or having DNSSEC widely deployed), while it is a fairly substantial operational liability. With HTTPS, operators have to deal with cryptographic material, regularly update their certificates, restart their services in time and assemble the whole thing correctly (don’t get me started about proxying, SNI, and all those horrors). Users, on the other hand, have to keep their CA certificates in order, in particular when they do programmatic VO access, where the browser vendors, their employers and who knows who else don’t do it for them. Pop quiz: How would you install a new CA certificate on your box? And will your default browser see it?

But on the other hand, there are some scenarios in which HTTPS makes sense, and I can remotely fantasise that some of those may even be relevant to the VO. And people have been asking for HTTPS in DaCHS a number of times, at times even because their administrations urged them to switch. So, here it is, hopefully. Turning it on is reasonably easy when you use Letsencrypt (which in particular entails having ports 80 and 443 open); the section on Letsencrypt in the operator’s guide tells you what to do. In particular, don’t forget the cron job, because without it, things would break after three months (when the initial certificate expires).

Things get difficult after that. For one, if your box is known under several names (our data center, for instance, can be reached as any of dc.g-vo.org, vo.uni-hd.de, and dc.zah.uni-heidelberg.de; this of course also includes things like www.example.org and example.org), you’ll now have to tell DaCHS about it in the new [web]alternateHostnames configuration item; for instance, we have

[web]
serverURL: http://dc.zah.uni-heidelberg.de
alternateHostnames:dc.g-vo.org, vo.uni-hd.de

in our /etc/gavo.rc.

And then the Registry has to know you have HTTPS. There’s actually no convention for that in the VO yet. But since I’d really like to have at least fallback interfaces with plain HTTP, we’ll have to come up with something. For now, my plan is to declare the alternative protocol (i.e., HTTPS for sites that have an HTTP serverURL and vice versa) using the brand-new VOResource 1.1 mirrorURLs (in RegTAP 1.1, they are in the mirror_url column of rr.interface). To make DaCHS declare the alternate URLs, set [web]registerAlternative to True.
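
Once such records are out there, you can look for the alternate endpoints yourself. Here is a minimal pyvo sketch; rr.interface and its mirror_url column are as described above, but whether a given RegTAP service already carries that RegTAP 1.1 column is another matter (the RegTAP service at dc.g-vo.org mentioned below does):

import pyvo

# ask a RegTAP endpoint for interfaces that declare an HTTPS mirror
regtap = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
result = regtap.run_sync("""
    SELECT ivoid, access_url, mirror_url
    FROM rr.interface
    WHERE mirror_url LIKE 'https://%'
    """)
print(result.to_table())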

Another change I’ve introduced for HTTPS is that the default HTML template for the form renderer (i.e., the one people use who come with a browser) now suppresses the SAMP button if the request came in through HTTPS; that’s because WebSAMP doesn’t work with HTTPS and probably never will – at least I can’t see a way to make it happen without totally wrecking what security guarantees HTTPS gives.

All this doesn’t yet cater for the case when you use a reverse proxy to terminate HTTPS. If you are in that situation, please talk to me so we can figure out a sane way for you to explain to DaCHS what to tell the Registry.

Anyway, if you want to try things out, just switch to the beta repository and upgrade. Feedback is highly welcome.

Oh, and if you’re a client developer: Our data center is now reachable through HTTPS (at https://dc.g-vo.org), and we have already pushed the records with mirrorURLs declaring HTTPS support to the RegTAP service at dc.g-vo.org (the others will have to wait a bit longer, as we haven’t re-published our registry records yet – it’s all experimental, after all).

Deredden using TAP

An animated color-magnitude diagram
Raw and dereddened CMD for a region in Cygnus.

Today I published a nice new service on our TAP service: The Bayestar17 3D dust map derived from Pan-STARRS 1 by Greg Green et al. I mention in passing that this was made particularly enjoyable because Greg and friends put an explicit license on their data (in this case, CC-BY-SA).

This dust map is probably a fascinating resource by itself, but the really nifty thing is that you can use it to correct all kinds of photometric data for extinction – at least to some extent. On the Bayestar web page, the authors give some examples for usage – and with our new service, you can use TAP as well to correct photometry for extinction.

To see how, first have a look at the table metadata for the prdust.map_union table; this is what casual users probably should look at. More specifically, at the coverage, best_fit, and grdiagnostic columns.

coverage here is an interval of 10-healpixes. It has to be an interval because the original data comes on wildly different levels; depending on the density of stars, sometimes it takes the area of a 6-healpix (about a square degree) to get enough signal, whereas in the galactic plane a 10-healpix (a thousandth of a square degree) already has enough stars. To make the whole thing conveniently queriable without exploding a 6-healpix row into 1000 identical rows, larger healpixes translate into intervals of 10-healpixes. Don’t panic, though, I’ll show how to conveniently query this below.

best_fit and grdiagnostic are arrays (remember the light curves in Gaia DR2?), with bins of 0.5 in distance modulus (which is, in case you feel a bit uncertain as to the algebraic signs, 5 log10(dist)-5 for a distance in parsec), starting with a distance modulus of 4 and ending with 19. This means that for a distance modulus of 4.2 you should check the array index 0, whereas 4.3 already would be covered by array index 1. With this, best_fit[ind] gives E(B-V) = (B-V) – (B-V)₀ in the direction of coverage around a distance modulus of ind/2+4. For each best_fit[ind], grdiagnostic[ind] contains a quality measure for that value. You probably shouldn’t touch the E(B-V) if that measure is larger than 1.2.
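
If you want to double-check that bin arithmetic before writing ADQL, here is a tiny Python sketch; the helper name is made up for illustration, and note that the full query further down adds 1 to this index, presumably because the array subscripts on the server side are 1-based:

def dist_mod_to_index(dist_mod):
    # 0-based index into best_fit/grdiagnostic for a given distance modulus
    return int(round((dist_mod - 4) * 2))

for dm in (4.2, 4.3, 10.0, 19.0):
    print(dm, dist_mod_to_index(dm))
# prints 0, 1, 12, and 30, respectively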

So, how does one use this?

To try things, let’s pull some Gaia data with distances; in order to have interesting extinctions, I’m using a patch in Cygnus (RA 288.5, Dec 2.3). If you live on the northern hemisphere and step out tonight, you could see dust clouds there with the naked eye (provided electricity fails all around, that is). Full disclosure: I tried the Coal Sack first but after checking the coverage of the dataset – which essentially is the sky north of -30 degrees – I noticed that wouldn’t fly. But stories like these are one reason why I’m making such a fuss about having standard STC coverage representations.

We want distances, and to dodge all the intricacies involved when naively turning parallaxes into distances, discussed at length in a paper by Xavier Luri et al (and elsewhere), I’m using precomputed distances from Bailer-Jones et al. (2018AJ....156...58B); you’ll find them on the “ARI Gaia” service; in TOPCAT’s TAP dialog simply search for “Gaia” – that’ll give you the GAVO DC TAP service, too, and we’ll need that in a second.

The pre-computed distances are in the gaiadr2_complements.geometric_distance table, which can be joined to the main Gaia object catalog using the source_id column. So, here’s a query to produce a little photometric catalog around our spot in Cygnus (we’re discarding objects with excessive parallax errors while we’re at it):

SELECT 
r_est, 5*log10(r_est)-5 as dist_mod,
phot_g_mean_mag, phot_bp_mean_mag, phot_rp_mean_mag,
ra, dec
FROM
gaiadr2.gaia_source
JOIN gaiadr2_complements.geometric_distance
USING (source_id)
WHERE
parallax_over_error>1
AND 1=CONTAINS(POINT('ICRS', ra, dec), CIRCLE('ICRS', 288.5, 2.3, 0.5 ))

The color-magnitude diagram resulting from this is the red point cloud in the animated GIF at the top. To reproduce it, just plot phot_bp_mean_mag-phot_rp_mean_mag against phot_g_mean_mag-dist_mod (and invert the y axis).
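
For reference, here is a minimal matplotlib sketch of that plot; the file name cygnus.vot is an assumption – use whatever you saved the TAP result as:

from astropy.table import Table
from matplotlib import pyplot as plt

t = Table.read("cygnus.vot", format="votable")
plt.scatter(
    t["phot_bp_mean_mag"] - t["phot_rp_mean_mag"],
    t["phot_g_mean_mag"] - t["dist_mod"],
    s=2)
plt.xlabel("BP - RP")
plt.ylabel("G - distance modulus")
plt.gca().invert_yaxis()  # brighter objects go up
plt.show()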

De-reddening this needs a few minor technicalities. The most important one is how to match against the odd intervals of healpixes in the prdust.map_union table. A secondary one is that we have only pulled equatorial coordinates, and the healpixes in prdust are in galactic coordinates.

Computing the healpix requires the ivo_healpix_index ADQL user defined function (UDF) that you may have met before, and since we have to go from ICRS to Galactic it requires a fairly new UDF I’ve recently defined to finally get the discussion on having a “standard library” of astrometric functions in ADQL going: gavo_transform. Here’s how to get a 10-healpix as required for map_union from ra and dec:

CAST(ivo_healpix_index(10, 
  gavo_transform('ICRS', 'GALACTIC', POINT(ra, dec))) AS INTEGER)

The CAST call is a pure technicality – ivo_healpix_index returns a 64-bit integer, which I can’t use in my interval logic.

The comparison against the intervals you could do yourself, but as argued in the Registry-STC article this is one of the trivial things that are easy to get wrong. So, let’s use the ivo_interval_overlaps UDF; it goes into the join condition to properly match prdust healpixes to catalog positions. Then our total query – that, I hope, should be reasonably easy to adapt to similar problems – is:

WITH sources AS (
  SELECT phot_g_mean_mag, 
    phot_bp_mean_mag, 
    phot_rp_mean_mag,
    dist_mod,
    CAST(ivo_healpix_index(10, 
      gavo_transform('ICRS', 'GALACTIC', POINT(ra, dec))) AS INTEGER) AS hpx,
    ROUND((dist_mod-4)*2)+1 AS dist_mod_bin
  FROM TAP_UPLOAD.T1)

SELECT
  phot_bp_mean_mag-phot_rp_mean_mag-dust.best_fit[dist_mod_bin] AS color,
  phot_g_mean_mag-dist_mod+
    dust.best_fit[dist_mod_bin]*3.384 AS abs_mag,
  dust.grdiagnostic[dist_mod_bin] as qual
FROM sources
JOIN prdust.map_union AS dust
ON (1=ivo_interval_has(hpx, coverage))

(If you’re following along: you have to switch to the GAVO DC TAP to run this, and you will probably have to change the index after TAP_UPLOAD).
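
If you would rather do the upload programmatically than through TOPCAT, here is a sketch with pyvo; local_catalog.vot stands for wherever you stored the Cygnus photometry from the first query (the name is just an assumption), and the upload is called t1 so the query can refer to TAP_UPLOAD.t1:

import pyvo
from astropy.table import Table

gavodc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
sources = Table.read("local_catalog.vot", format="votable")

dereddened = gavodc.run_sync("""
    WITH sources AS (
      SELECT phot_g_mean_mag, phot_bp_mean_mag, phot_rp_mean_mag, dist_mod,
        CAST(ivo_healpix_index(10,
          gavo_transform('ICRS', 'GALACTIC', POINT(ra, dec))) AS INTEGER) AS hpx,
        ROUND((dist_mod-4)*2)+1 AS dist_mod_bin
      FROM TAP_UPLOAD.t1)
    SELECT
      phot_bp_mean_mag-phot_rp_mean_mag-dust.best_fit[dist_mod_bin] AS color,
      phot_g_mean_mag-dist_mod+dust.best_fit[dist_mod_bin]*3.384 AS abs_mag,
      dust.grdiagnostic[dist_mod_bin] AS qual
    FROM sources
    JOIN prdust.map_union AS dust
    ON (1=ivo_interval_has(hpx, coverage))
    """,
    uploads={"t1": sources})
print(dereddened.to_table())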

Ok, in the photometry department there’s a bit of cheating going on here – I’m correcting Gaia B-R with B-V, and I’m using the factor for Johnson V to estimate the extinction in Gaia G (if you’re curious where that comes from: the footnote on best_fit and the MC extinction service docs should get you started), so this is far from physically correct. But, as you can see from the green cloud in the plot above, it already helps a bit. And if you find out better factors, by all means let me know so I can add an update… right here:

Update (2018-09-11): The original data creator, Gregory Green points out that the thing with having a better factor for Gaia G isn’t that simple, because, as he says “Gaia G is very broad, [and] the extinction coefficients are much more dependent on stellar type, and extinction is also more nonlinear with dust column (extinction is only linear with dust column and independent of stellar type for an infinitely narrow passband)”. So – when de-reddening, prefer narrow passbands. But whether narrow or wide: TAP helps you.

DaCHS 1.2 is out

Today, I have released DaCHS 1.2 – somewhat belatedly perhaps, because I managed to break my collarbone, but here it is. If you’ve been following this blog, you already know about the headline news: the dachs start command, ADQL 2.1, and early support for STC in the registry.

If you’re not yet on DaCHS 1.1, please have a quick look at the corresponding release article. While the upgrade itself should work fine in one go even from older versions, the release notes of course apply cumulatively, and you may still have to do the dist-upgrade to 1.1.

As usual, the generic upgrading instructions are available in the operator’s guide (in short: do a dachs val ALL; apt update; apt upgrade). Since as late as last April I still encountered DaCHS installations with wrong sources.list entries: Note again that our repository names changed in August 2016 – we now have release and beta rather than Debian release names. So, make sure you have something like

deb http://vo.ari.uni-heidelberg.de/debian release main

in your /etc/apt/sources.list, not something containing “stable” or the like.

That said, here are the commented changes for 1.2:

  • New dachs start command to produce structured templates for certain service types. See Horror Vacui Begone on this blog for the full story.
  • Support for ADQL 2.1 (actually, its current proposed recommendation), including almost all of the optional parts (see Speak out on ADQL 2.1 on this blog). While not strictly necessary, it’s a good idea to run dachs imp //adql after the upgrade; this will give you some nice new UDFs, in particular gavo_histogram.
  • New coverage element (with updaters) to build and declare the space-time-spectral coverage of a resource. It would be great if you could add coverage elements to your resources where it makes sense and re-publish them. This blog post tells you how to do it (you’ll have to scroll down a bit).
  • There is now odbcGrammar to feed an import from another database. Essentially, you put an ODBC connection string into a file, point your sources element there, and you’ll get one rawdict per tuple in a foreign database table. This might be a nice way to publish moderate-size non-postgres tables via DaCHS.
  • You can now declare associated datalink services for tables using the _associatedDatalinkSvc meta item. In particular, if you had a datalink property on SSAP services, you should migrate at some point. One advantage: Users will get the datalinks even when querying the tables through TAP. See “Integrating Datalink Services” in the reference documentation for the full story.
  • We now force matplotlib to read its configuration from /var/gavo/etc/matplotlibrc; to get a default, just run dachs init again. This is mainly to avoid uncontrolled imports of matplotlibrcs when DaCHS is run under a uid that does other things now and then.
  • DaCHS now supports VOSI 1.1; in particular, DaCHS now understands the detail hints and has per-table endpoints, so clients like TOPCAT could avoid reading the full table metadata in one go. Realistically, at least TOPCAT doesn’t yet, so this is perhaps less cool than it may sound.
  • The indices generated by the ssa mixins are now a bit more sensible considering typical query modes. You probably want to run dachs imp -I on the RDs for your ssap data collections when convenient. If you have larger spectral collections, chances are many queries will be a lot faster.
  • ssapCore no longer wantonly adds preview columns. If you have previews with spectra, you probably want to add <property name="previews">auto</property> to your ssapCores. If you don’t, the preview column will not be added to SSA responses (right now, few clients evaluate it, but that will hopefully change in the future).
  • You can now add a statisticsTarget property to columns; you will want this on largish tables with non-uniformly distributed values to aid the query planner; something like <property key="statisticsTarget">10000</property> within the corresponding column element can go a long way to improve query planning (you need to run gavo imp -m on the RD after the change).
  • DaCHS’s log now by default does not contain IP addresses, user agents, and referrers any more, which should mostly keep you from processing personal data and thus from having to muck around with the EU GDPR. To get back the previous behaviour, set [web]logFormat in /etc/gavo.rc to combined.
  • I fixed some utypes for obscore 1.1. These utypes are useless, so there’s nothing you have to do. But then stilts taplint complains about them, and so you may want to run dachs imp -m //obscore.
  • As usual, there are many minor bug fixes and improvements (e.g., memmapping FITSes for cutout again, delimited table references in ADQL, new-style tutorial resource records, correct obscore standardId, much saner nD-arrays in VOTables).

Well – enjoy the release, and if something goes wrong with it, be sure to let us know, preferably on the DaCHS-support mailing list.

Gaia DR2: A light version and light curves

screenshot: topcat and matplotlib
Topcat is doing datalink, and our little python script has plotted a two-color time series of RMC 18 (or so I think).

If anyone ever writes a history of the VO, the second data release of Gaia on April 25, 2018 will probably mark its coming-of-age – at least if you, like me, consider the Registry the central element of the VO. It was spectacular to view the spike of tens of Registry queries per second right around 12:00 CEST, the moment the various TAP services handing out the data made it public (with great aplomb, of course).

In GAVO’s Data Center we also carry Gaia DR2 data. Our host institute, the Zentrum für Astronomie in Heidelberg, also has a dedicated Gaia server. This relieves us of having to be a true mirror of the upstream data release. And since the source catalog has lots and lots of columns that most users will not be using most of the time, we figured a “light” version of the source catalog might fill an interesting ecological niche: Behold gaia.dr2light on the GAVO DC TAP service, containing essentially just the basic astrometric parameters and the diagonal of the covariance matrix.

That has two advantages: Result sets with SELECT * are a lot less unwieldy (but: just don’t do this with Gaia DR2), and, more importantly, a lighter table puts less load on the server. You see, conventional databases read entire rows when processing data, and having just 30% of the columns means we will be 3 times faster on I/O-bound tasks (assuming the same hardware, of course). Hence, and contrary to several other DR2-carrying sites, you can perform full sequential scans before timing out on our TAP service on gaia.dr2light. If, on the other hand, you need to do debugging or full-covariance-matrix error calculations: The full DR2 gaia_source table is available in many places in the VO. Just use the Registry.

Photometry via TAP

A piece of Gaia DR2 that’s not available in this form anywhere else is the lightcurves; that’s per-transit photometry in the G, BP, and RP band for about 0.5 million objects that the reduction system classified as variable. ESAC publishes these through datalink from within their gaia_source table, and what you get back is a VOTable that has the photometry in the three bands interleaved.

I figured it might be useful to have that data in a TAP-queriable table, with the lightcurves stored as arrays in the database. And that’s how gaia.dr2epochflux came into being. In there, you have three triples of arrays: the epochs (g_transit_time, bp_obs_time, and rp_obs_time), the fluxes (g_transit_flux, bp_flux, and rp_flux), and their errors (you can probably guess their names). So, to retrieve G lightcurves where available together with a gaia_source query of your liking, you could write something like

SELECT g.*, g_transit_time, g_transit_flux
FROM gaia.dr2light AS g
LEFT OUTER JOIN gaia.dr2epochflux
USING (source_id)
WHERE ...whatever...

– the LEFT OUTER JOIN arranges things such that the g_transit_time and g_transit_flux columns simply are NULL when there are no lightcurves; with a normal (“inner”) join, rows without lightcurves would not be returned in such a query.

To give you an idea of what you can do with this, suppose you would like to discover new variable blue supergiants in the Gaia data (who knows – you might discover the precursor of the next nearby supernova!). You could start with establishing color cuts and train your favourite machine learning device on light curves of variable blue supergiants. Here’s how to get (and, for simplicity, plot) time series of stars classified as blue supergiants by Simbad for which Gaia DR2 lightcurves are available, using pyvo and a little async trick:

from matplotlib import pyplot as plt
import pyvo

def main():
  simbad = pyvo.dal.TAPService(
    "http://simbad.u-strasbg.fr:80/simbad/sim-tap")
  gavodc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

  # Get blue supergiants from Simbad
  simjob = simbad.submit_job("""
    select main_id, ra, dec
    from basic
    where otype='BlueSG*'""")
  simjob.run()

  # Get lightcurves from Gaia
  try:
    simjob.wait()
    time_series = gavodc.run_sync("""
      SELECT b.*, bp_obs_time, bp_flux, rp_obs_time, rp_flux
      FROM (
        SELECT main_id, source_id, g.ra, g.dec
        FROM gaia.dr2light AS g
        JOIN TAP_UPLOAD.t1 AS tc
        ON (0.002>DISTANCE(tc.ra, tc.dec, g.ra, g.dec))
        OFFSET 0) AS b
      JOIN gaia.dr2epochflux
      USING (source_id)
      """,
      uploads={"t1": simjob.result_uri})
  finally:
    simjob.delete()

  # Now plot one after the other
  for row in time_series.table:
    plt.plot(row["bp_obs_time"], row["bp_flux"])
    plt.plot(row["rp_obs_time"], row["rp_flux"])
    plt.show(block=False)
    raw_input("{}; press return for next...".format(row["main_id"]))
    plt.cla()

if __name__=="__main__":
  main()

If you bother to read the code, you’ll notice that we transfer the Simbad result directly to the GAVO data center without first downloading it. That’s fairly boring in this case, where the table is small. But if you have a narrow pipe for one reason or another and some 10⁵ rows, passing around async result URLs is a useful trick.

In this particular case the whole thing returns just four stars, so perhaps that’s not a terribly useful target for your learning machine. But this piece of code should get you started to where there’s more data.

You should read the column descriptions and footnotes in the query results (or from the reference URL) – this tells you how to interpret the times and how to make magnitudes from the fluxes if you must. You probably can’t hear it any more, but just in case: If you can, process fluxes rather than magnitudes from Gaia, because the errors are painful to interpret in magnitudes when the fluxes are small (try it!).
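
In case you do need magnitudes, the conversion is just mag = zeropoint − 2.5 log10(flux); here is a minimal sketch, where the zeropoint value is a placeholder you would look up per band in the Gaia DR2 documentation:

import numpy

def flux_to_mag(flux, zeropoint):
    # works on scalars and on the flux arrays coming out of dr2epochflux
    return zeropoint - 2.5 * numpy.log10(flux)

# e.g. flux_to_mag(row["g_transit_flux"], G_ZEROPOINT)  # G_ZEROPOINT: see the Gaia docs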

Note how the photometry data is stored in arrays in the database, and that VOTables can just transport these. The bad news is that support for manipulating arrays in ADQL is pretty much zero at this point; this means that, when you have trained your ML device, you’ll probably have to still download lots and lots of light curves rather than write some elegant ADQL to do the filtering server-side. However, I’d be highly interested to work out how some tastefully chosen user defined functions might enable offloading at least a good deal of that analysis to the database. So – if you know what you’d like to do, by all means let me know. Perhaps there’s something I can do for you.

Incidentally, I’ll talk a bit more about ADQL arrays in a blog post coming up in a few weeks (I think). Don’t miss it – subscribe to our feed.

Datalink

In the results from queries involving gaia.dr2epochflux, we also provide datalinks. These let you retrieve lightcurves that already have mags and that are more easily plotted. Perhaps more importantly, they link back to the full ESAC lightcurves that, in addition, give you a lot more debug information and are required if you want to reliably identify photometry points with the identifiers of the transits that generated them.

Datalink support in clients still is not great, but it’s growing nicely. Your ideas for workflows that should be supported are (again) most welcome – and have a good chance of being adopted. So, try things out, for instance by getting the most recent TOPCAT (as of this writing) and doing the following:

  1. Open the VO/TAP dialog from the menu bar and double click the GAVO DC TAP service.
  2. Enter
    SELECT source_id, ra, dec,
    phot_bp_mean_mag, phot_rp_mean_mag, phot_g_mean_mag,
    g_transit_time, g_transit_flux,
    rp_obs_time, rp_flux
    FROM gaia.dr2epochflux 
    JOIN gaia.dr2light
    USING (source_id)
    WHERE parallax>50
    

    into the “ADQL Text” field to retrieve lightcurves for the more nearby variables (in reality, you’d have to be a bit more careful with the distances, but you already knew that).

  3. Plot something like phot_bp_mean_mag-phot_rp_mean_mag vs. phot_g_mean_mag (and adapt the plot to fit your viewing habits).
  4. Open the dialog for Views/Activation Actions (from the menu bar or the tool bar – same thing), check “Invoke Service”, choose “View Datalink Table”.
  5. Whenever you click on a point in your CMD, a window will pop up in which you can choose between the time series in the various bands, and you can pull in the data from ESAC; to load a table, select “Load Table” from the actions near the foot of the datalink table and click “Invoke”.

Yeah. It’s clunky. Help us make it better with your fresh ideas for interfaces (and don’t be cross with us if we have to marry them with what’s technically feasible and readily generalised).

SSAP and Obscore

If you’re fed up with bleeding-edge tech, the light curves are also available through good old SSAP and Obscore. To use that, just get Splat (or another SSA client, preferably with a bit of time series support). Look for a Gaia DR2 time series service (you may have to update the service list before you find it), enter (in keeping with our LBV theme) S Dor as position and hit “Lookup” followed by “Send Query”. Just click on any result to view the time series – and then apply Splat’s rich tool set to it.
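
If you’d rather discover that service programmatically, a registry search along these lines should work – a sketch only, and the keywords are guesses you may need to adjust until the service described above turns up:

import pyvo

# print title and access URL of SSA services matching the keywords
for res in pyvo.regsearch(servicetype="ssa", keywords=["gaia", "time series"]):
    print(res.res_title, res.access_url)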

Update (2018-05-08): Clusters

Here’s another quick application – how about looking for variable stars in clusters? This piece of ADQL should get you started:

SELECT TOP 100 
  source_id, ra, dec, parallax, g.pmra, g.pmdec,
  m.name, m.pmra AS c_pmra, m.pmde AS c_pmde, 
  m.e_pm AS c_e_pm,
  1/dist AS cluster_parallax
FROM 
  gaia.dr2epochflux
  JOIN gaia.dr2light AS g USING (source_id)
  JOIN mwsc.main AS m
  ON (1=CONTAINS(
    POINT(g.ra, g.dec),
    CIRCLE(m.raj2000, m.dej2000, rcluster)))
WHERE IN_UNIT(pmdec, 'deg/yr') BETWEEN m.pmde-m.e_pm*3 AND m.pmde+m.e_pm*3

– yes, you’ll want to constrain pmra, too, and the distance, and properly deal with error and all. But you get simple lightcurves for free. Just add them in the SELECT clause!

Horror vacui begone

browser and editor
Mikhail’s qrdcreator in a browser and an editor with a dachs start-produced template.

One of the major usability issues our publishing suite DaCHS has for operators (i.e., people who want to publish data) is the “horror vacui”: How do I start a Resource Descriptor (RD – the file DaCHS interprets to build services)?

I used to recommend starting by having a look at the RDs of our existing services and picking whatever best matches your publication project. But finding a matching service and figuring out what is generic, what’s a special property of the concrete data collection, and what’s a hack that should not be reproduced isn’t straightforward at all, not to mention the fact that some of those RDs have been in maintenance mode for almost 10 years and hence may show deprecated practices.

Then came the VESPA implementation workshop last year, during which Mikhail Minin showed me a piece of javascript and HTML (source on github) he has written to overcome the empty editor window. Essentially, Mikhail has built a fairly comprehensive form interface in a web browser that asks people the right questions to eventually write an RD for EPN-TAP (i.e., solar system) resources.

I had planned to generalise Mikhail’s approach to several types of resources supported by DaCHS, ideally inferring the questions to ask from the built-in documentation of mixins and applys. But during the last year, whenever I felt it would be a good time to tackle that generalisation, I quickly gave up again. It was mostly rather trivial stuff such as how to tell apart repeatable metadata (waveband, say) and non-repeatable metadata (instrument, say). But it was bad enough that I quickly found something else to do each time I got started.

Eventually, I gave up on a menu interface altogether – making it flexible and generatable at the same time seemed a fairly complex problem. But that doesn’t mean I forgot about overcoming the horror vacui thing. So, when forms aren’t flexible enough for data entry, where do you turn? Right! A text editor.

Enter dachs start. That’s a new DaCHS subcommand that gets you started with your RD. For one, you can list the templates available:

$ dachs start list
siap -- Image collections via SIAP1 and TAP
ssap+datalink -- Spectra via SSAP and TAP, going through datalink
epntap -- Solar system data via EPN-TAP 2.0
scs -- Catalogs via SCS and TAP

More templates are planned; siap+datalink, for instance, would cover some frequent use cases. Feel free to mail in requests.

Once you find a suitable template, create your future resource directory, enter it and run dachs start again, this time passing the name of the template you want:

$ mkdir ex_data
$ cd ex_data
$ dachs start scs
$ head -16 q.rd | tail -9
<resource schema="ex_data">
  <meta name="creationDate">2018-04-13T12:34:31Z</meta>

  <meta name="title">%title -- not more than a line%</meta>
  <meta name="description">
    %this should be a paragraph or two (take care to mention salient terms)%
  </meta>
  <!-- Take keywords from 
    http://astrothesaurus.org/thesaurus/hierarchical-browse/

dachs start uses the directory name as the new schema name and then writes a file q.rd (which is the canonical name for the “main” RD in a resource). Within this file, you’ll see things to fill out between pairs of percent signs, with short explanations. Where longer explanations are necessary, embedded comments should help.

To give you an idea of the intended use: As a vim user, I’ve put

augroup rd
  au!
  au BufRead,BufNewFile *.rd imap <F8> <ESC>/%[^%]*%<CR>a
  au BufRead,BufNewFile *.rd imap <F9> <ESC>cf%
augroup END

into my ~/.vimrc. That way, while editing the template into an actual RD, hitting F8 takes me to the next thing to be edited; I can then read the instructions, and when I have made up my mind, I can either delete the template element or hit F9 and replace the explanation text with whatever belongs there.

The command is available starting with the 1.1.3 beta (available now by switching to the beta repo) and will be part of the 1.2 release, planned for early June after the Victoria interop.

If you have a publication project: just try it out and give feedback. Note that the templates haven’t actually been tested yet, and the comments were written by a DaCHS and VO nerd, so they might not always be great either. Thus, when you get stuck: complain early, complain often!

Speak out on ADQL 2.1

If you’ve always wanted to be part of a standardisation process within the IVOA (and who would not?), the time has rarely been as good as now. Because: We’re updating ADQL! Yes! The ADQL you are writing your queries in will receive a few more language elements, and we’re carefully trying to heal a few things that turned out to be warts. And while some of the changes are as dull and boring as you may expect standards work to be, on some of them you may wish to have a say.

Also, you can try things out – the GAVO data center TAP endpoint at http://dc.g-vo.org/tap already has most of the proposed features, and the new DaCHS beta 1.1.2 (out since last Friday) does, too. So, if you’re running DaCHS yourself, you can start playing after switching to the beta repository.

What’s new?

  • You’re now supposed to write the standard crossmatch as DISTANCE(ra1, dec1, ra2, dec2)<dist. This replaces the old dance with 1=CONTAINS(POINT(), CIRCLE()) that you’ve probably learned to hate. Finally: Crossmatching without having to resort to TOPCAT’s example menu… (there’s a small pyvo sketch trying out DISTANCE and IN_UNIT right after this list).
  • ADQL geometries used to require a first argument that would give the reference frame, as in POINT('ICRS', ra, dec). The hope was that services could then automagically make a statement like CONTAINS(point_in_icrs, circle_in_galactic) work as presumably intended. Few services ever did (DaCHS still tries reasonably hard), and when they did, there were all kinds of opaque oddities. One of the most common sources of confusion is the question what a service is supposed to do with POINT('GALACTIC', ra, dec), assuming it knows that ra and dec are in, say, B1950 FK4. Also, is there any expectation that services attempt to do anything beyond a simple rotation (FK4, for instance, rotates noticeably against the ICRS, so proper motions would need to get fixed, too)? In all, the frame as a first argument was ill thought-out, and it’s been deprecated. Simply don’t put in the string-typed first argument any more. POINT(long, lat) does it. True: This, more than ever, calls for an ADQL astrometry library so you can easily convert, at least, between Galactic and ICRS (probably a few more would be useful, too). More on this in some future post.
  • Services should have CAST now. Sometimes you want to turn a number into a string or a string into a timestamp. In such cases, you can write CAST('1991-02-01', TIMESTAMP) now. The details are not quite, excuse me, cast in stone yet, so if you have a use case for this kind of thing, speak up now. The current draft also calls for a TIMESTAMP(tx) function – but since that’s really not different from CAST(tx, TIMESTAMP), I’m trying to dissuade people from adding it.
  • Services should have an IN_UNIT function now. That’s a nifty thing in particular when you’re re-using queries on different services. Just write, say, IN_UNIT(pmra, 'deg/yr') and never worry again if it’s arcsec/yr, mas/yr, rad/cy, or whatever. The second argument, by the way, is written according to the Units in the Virtual Observatory standard. It’s an optional feature according to the current standard, so perhaps it’s too early to party, but I’ve found this extremely useful, and so I hope we’ll see widespread adoption.
  • Services should now have set operations. These are UNION, EXCEPT, and INTERSECT and are useful when you have two queries that result in the same table schema (because they won’t work otherwise). Say you have two complex ways to filter rows from the table source, but you want to process both sorts of results further on – you can then say something like
      SELECT <whatever complex> FROM (
          (SELECT a,b,c FROM source
            WHERE <crazy stuff>
            GROUP BY a, b, c)
        UNION
          (SELECT a,b,c FROM source
            WHERE <other crazy stuff>
            GROUP BY a, b, c)
        ) AS q
      WHERE <more complex stuff over a, b, and c>
    

    – and similarly, EXCEPT lets you “punch a hole” in a result table. Another interesting use case would be to query many tables on a service like VizieR in one go; that still works if you make sure the tables defined by the sub-queries have the same columns. Given that a lot of cross-table operations actually boil down to JOINs and WHERE clauses, the set operations are used less than one would expect. But if you need them, there’s no real alternative (short of downloading far too much and performing the operation locally, which of course defeats the purpose of TAP).

  • Common table expressions (“WITH”). DaCHS doesn’t do these yet, and it will only pick them up if someone else implements them first. In the way ADQL 2.1 has them (“nonrecursive”), CTEs are little more than syntactic sugar, and I’m not quite sure if the additional implementation complexity is worth it. If you’re curious, check CTEs in the postgres manual. If that makes you drool for WITH in ADQL, let me know. It’ll not be too hard to sway me to put them in.
  • Bitwise Operations. That’s when integers are treated as bit patterns. If this sounds like nerd stuff to you, well, it happens quite a bit in actual catalogs. See, for instance, Note 3 for the PPMXL. You’d need the flags column described there if you wanted to exclude PPMXL objects that replaced multiple USNO-B1.0 objects (bit 3); right now, you have to write something like MOD(flags,16)>7. That’s a bit of magic that everyone will have to think about for a while. With bitwise operations, you’ll just write BITWISE_AND(flags,8)=8, which will look familiar to everyone who has used the pattern before (in particular, it’s clear we’re talking about bit 3). There still is discussion whether bitwise operations are common enough to warrant special syntax – the draft currently says the above should be written as flags&8=8 – or whether the functions DaCHS has at the moment (they’re called BITWISE_AND, BITWISE_OR, BITWISE_XOR, and BITWISE_NOT) are good enough.
  • Offset. If you’ve ever done anything with ADQL, you’ll know that SELECT TOP 10 * FROM hipparcos.main ORDER BY parallax DESC will give you the 10 objects with the largest parallaxes. But what if you want the next 10 closest stars? Well, OFFSET to the rescue:
    SELECT TOP 10 * 
    FROM hipparcos.main 
    ORDER BY parallax DESC
    OFFSET 10
    

    There is another, more sinister, application for OFFSET, which happens to be the actual reason I’ve put it into DaCHS’ ADQL ages ago: Written as OFFSET 0, several databases use it to denote a barrier for the query planner. This is explained to some degree in the classic DaCHS TAP example Crossmatch for a Guide Star – which still mentions the first hack I had built into DaCHS to let query authors rein in overzealous query planners.

  • LOWER and ILIKE. ADQL has been extremely weak on the side of text processing, so weak indeed that it wasn’t nearly enough to cover the use cases for the registry when it moved to RegTAP. ADQL 2.1 adds two basic features – LOWER, a function that lets people query in a case-insensitive fashion, and ILIKE, an operator that is like LIKE, but again ignores case. While both features are obviously great as soon as people dump any kind of text (think object names) into their databases, I’m not terribly happy with ILIKE, as it does the same as RegTAP’s ivoa_nocasematch user defined function, and it’s always bad when two standards foresee two different mechanisms for the same thing.
  • Geometry-typed arguments. CIRCLE and POLYGON now accept POINTs in alternative constructor functions. That is, you can now say CIRCLE(POINT(ra, dec), radius) in addition to the traditional CIRCLE(ra, dec, radius). In itself, that’s probably not terribly exciting, but when you have actual POINTs in your database, it’s much more compact to write, say,
      SELECT *
      FROM zcosmos.data
      WHERE 0=CONTAINS(
        ssa_targetpos,
        CIRCLE(ssa_location, ssa_aperture))
    

    (which would return rows for those spectra for which the declared aperture does not contain the declared target). Before, you’d have had to write some fairly ugly expression involving COORD1 and whatnot in order to achieve the same effect.

  • Boolean expressions. That’s another one that’s still a bit up in the air. First, the rough goal is to allow boolean values in ADQL-accessible tables, which so far have been a hack at best. In the future, you should be able to say WHERE is_broken=True. However, people coming from other languages will find that odd, and indeed, in python I’d cringe at if is_broken==True:. What I’d expect is if is_broken:. Do we want this in ADQL? Currently, it’s in the grammar (more or less like this), but this kind of thing makes it still harder to produce useful syntax error messages. Is it worth it, either way? I’m not sure.
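
As promised above, here is a minimal pyvo sketch exercising two of the new features, DISTANCE and IN_UNIT, against the GAVO DC TAP endpoint mentioned at the top of this post; gaia.dr2light and its columns appear elsewhere on this blog (substitute any table with ra, dec, and pmra columns), and the position is arbitrary:

import pyvo

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
result = svc.run_sync("""
    SELECT TOP 5
      source_id,
      IN_UNIT(pmra, 'deg/yr') AS pmra_deg_yr
    FROM gaia.dr2light
    WHERE DISTANCE(ra, dec, 266.417, -29.008) < 0.05
    """)
print(result.to_table())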

That about concludes my quick review of the new features of ADQL 2.1. If you’d like to know more, the current draft is on the IVOA document repository, and if you can deal with version control (you should!), you can follow the bleeding edge in the ADQL document in Volute. Discussion happens on the DAL mailing list.

Update (2018-04-13): Well, as to the CTEs, I couldn’t resist after all, and they’re in with DaCHS 1.1.3. And I have to say I love them – they weren’t hard to put in, and once they’re there they make so many queries a good deal more readable than before. I’ve even put in a server-defined example for CTEs on the Heidelberg TAP service showcasing a particularly compelling use case.

Space and Time not lost on the Registry

Histogram: observation dates of an image service

A histogram of times for which the Palomar-Leiden service has images: That’s temporal service coverage right there.

If you are an astronomer and you’ve ever tried looking for data in the Virtual Observatory Registry, chances are you have wondered “Why can’t I enter my position here?” Or perhaps “So, I’m looking for images in [NIII] – where would I go?”

Both of these are examples for the use of Space-Time Coordinates (STC) in data discovery – yes, spectral coordinates count as STC, too, and I could make an argument for it. But this post is about something else: None of this has worked in the Registry up to now.

It’s time to mend this blatant omission. To take the next steps, after a bit of discussion on some of the IVOA’s mailing lists, I have posted an IVOA note proposing exactly those last Thursday. It is, perhaps with a bit of over-confidence, called A Roadmap for Space-Time Discovery in the VO Registry. And I’d much appreciate feedback, in particular if you are a VO user and have ideas on what you’d like to do with such a facility.

In this post, I’d like to give a very quick run-down on what is in it for (1) VO users, (2) service operators in general, and (3) service operators who happen to run DaCHS.

First, users. We already are pretty good on spatial coverage (for about 13000 of almost 20000 resources), so it might be worth experimenting with that. For now, the corresponding table is only available on the RegTAP mirror at http://dc.g-vo.org/tap. There, you can try queries like

select ivoid from
rr.table_column
natural join rr.stc_spatial
where
  1=contains(gavo_simbadpoint('HDF'), coverage)
  and ucd like 'phot.flux;em.radio%'

to find – in this case – services that have radio fluxes in the area of the Hubble Deep Field. If these lines scare you or you don’t know what to do with the stupid ivoids, check the previous post on this blog – it explains a bit more about RegTAP and why you might care.

Similarly cool things will, hopefully, some day be possible in spectrum and time. For instance, if you were interested in SII fluxes in the crab nebula in the early sixties, you could, some day, write

SELECT ivoid FROM
rr.stc_temporal
NATURAL JOIN rr.stc_spectral
NATURAL JOIN rr.stc_spatial
WHERE
  1=CONTAINS(gavo_simbadpoint('M1'), coverage)
  AND 1=ivo_interval_overlaps(
    6.69e-7, 6.75e-7, 
    wavelength_start, wavelength_end)
  AND 1=ivo_interval_overlaps(
    36900, 38800,
    time_start, time_end)

As you can see, the spectral coordinate will, following (admittedly broken) VO convention, be given in meters of vacuum wavelength, and time in MJD. In particular the thing with the wavelength isn’t quite settled yet – personally, I’d much rather have energy there. For one, it’s independent of the embedding medium, but much more excitingly, it even remains somewhat sensible when you go to non-electromagnetic messengers.

A pattern I’m trying to establish is the use of the user-defined function ivo_interval_overlaps, also defined in the Note. This is intended to allow robust query patterns in the presence of two intrinsically interval-valued things: The service’s coverage and the part of the spectrum you’re interested in, say. With the proposed pattern, either of these can degenerate to a single point and things still work. Things only break when both the service and you figure that “Aw, Hα is just 656.3 nm” and one of you omits a digit or adds one.

But that’s academic at this point, because really few resources define their coverage in time and spectrum. Try it yourself:

SELECT COUNT(*) FROM (
  SELECT DISTINCT ivoid FROM rr.stc_temporal) AS q

(the subquery with the DISTINCT is necessary because a single resource can have multiple rows for time and spectrum when there’s multiple distinct intervals – think observation campaigns). If this gives you more than a few dozen rows when you read this, I strongly suspect it’s no longer 2018.

To improve this situation, the service operators need to provide the information on the coverage in their resource records. Indeed, the registry schemas already have the notion of a coverage, and the Note, in its core, simply proposes to add three elements to the coverage element of VODataService 1.1. Two of these new elements – the coverage in time and spectrum – are simple floating-point intervals and can be repeated in order to allow non-contiguous coverage. The third element, the spatial coverage, uses a nifty data structure called a MOC, which expands to “HEALPix Multi-Order Coverage map” and is the main reason why I claim we can now pull off STC in the Registry: MOCs let databases and other programs easily and quickly manipulate areas on the sphere. Without MOCs, that’s a pain.

So, if you have registry records somewhere, please add the elements as soon as you can – if you don’t know how to make a MOC: CDS’ Aladin is there to help. In the end, your coverage elements should look somewhat like this:

<coverage>
  <spatial>3/336,338,450-451,651-652,659,662-663 
    4/1816,1818-1819,1822-1823,1829,1840-1841</spatial>
  <temporal>37190 37250</temporal>
  <temporal>54776 54802</temporal>
  <spectral>3.3e-07 6.6e-07</spectral>
  <spectral>3.5e-06 2.0e-05</spectral>
  <waveband>Optical</waveband>
  <waveband>Infrared</waveband>
</coverage>

The waveband elements are holdovers from VODataService 1.1. They are still in use (prominently, for one, in SPLAT), and it’s certainly still a good idea to keep giving them for the foreseeable future. You can also see how you would represent multiple observing campaigns and different spectral ranges.

Finally, if you’re running DaCHS and you’re using it to generate registry records (and there’s almost no excuse for not doing so), you can simply write a coverage element into your RD starting with DaCHS 1.2 (or, if you run betas, 1.1.1, which is already available). You’ll find lots of examples at the usual place. A relatively interesting example is the resource descriptor of plts; it has this:

<coverage>
  <updater spaceTable="data" spectralTable="data" mocOrder="4"/>
  <spectral>3.3e-07 6.6e-07</spectral>
  <temporal>37190 37250</temporal>
  <temporal>38776 38802</temporal>
  <temporal>41022 41107</temporal>
  <temporal>41387 41409</temporal>
  <temporal>41936 41979</temporal>
  <temporal>43416 43454</temporal>
  <spatial>3/282,410 4/40,323,326,329,332,387,390,396,648-650,1083,1085,1087,1101-1103,1123,1125,1132-1134,1136,1138-1139,1144,1146-1147,1173-1175,1216-1217,1220,1223,1229,1231,1235-1236,1238,1240,1597,1599,1614,1634,1636,1728,1730,1737,1739-1740,1765-1766,1784,1786,2803,2807,2809,2812</spatial>
</coverage>

This particular service archives plate scans from the Palomar-Leiden Trojan surveys; these were looking for Trojan asteroids (of Jupiter) using the Palomar 122 cm Schmidt and were conducted in several shortish campaigns between 1960 and 1977 (incidentally, if you’re looking for things near the Ecliptic, this stuff might still hold valuable insights for you). Because the fill factor for the whole time period is rather small, I manually extracted the time coverage; for that, I ran select dateobs from plts.data via TAP and made the histogram plot above. Zooming in a bit, I read off the limits in TOPCAT’s coordinate display.
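
If you want to reproduce that plot, a sketch along these lines should do; the table and column names are as given above, and whether dateobs comes back directly as MJD is something to check against the column metadata:

from matplotlib import pyplot as plt
import pyvo

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
dates = svc.run_sync("SELECT dateobs FROM plts.data").to_table()
plt.hist(dates["dateobs"], bins=100)
plt.xlabel("dateobs")
plt.ylabel("number of plates")
plt.show()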

The other coverages, however, were put in automatically by DaCHS. That’s what the updater element does: for each axis, you can say where DaCHS should look, and it will then fill in the appropriate data from what it guesses gives the relevant coordinates – that’s straightforward for standard tables like the ones behind SSAP and SIAP services (or obscore tables, for that matter), perhaps a bit more involved otherwise. To say “just do it for all axes”, give the updater a single sourceTable attribute.

Finally, in this case I’m overriding mocOrder, the order down to which DaCHS tries to resolve spatial features. I’m doing this here because in determining the coverage of image services DaCHS right now only considers the centers of the images, and that’s severely underestimating the coverage here, where the data products are the beautiful large Schmidt plates. Hence, I’m lowering the resolution from the default 6 (about one degree linearly) to still give some approximation to the actual data coverage. We’ll fix the underlying deficit as soon as pgsphere, the postgres extension which is actually dealing with all the MOCs, has support for turning circles and polygons into MOCs.

When you have defined an updater, just run dachs limits q.rd, and DaCHS will carefully (preserving your indentation) re-write the RD to contain what DaCHS has worked out from your table (but careful: it will overwrite what was previously there; so, make sure you only ask DaCHS to deal with axes you’re not dealing with manually).

If you feel like writing code discovering holes in the intervals, ideally already in the database: that would be great, because the tighter the intervals defined, the fewer false positives people will have in data discovery.

The take-away for DaCHS operators is:

  1. Add STC coverage to your resources as soon as you’ve updated to DaCHS 1.2
  2. If you don’t need the tightest coverage declaration conceivable, all you have to do is add
      <coverage>
        <updater sourceTable="my_table"/>
      </coverage>

    to your RD (where my_table is the id of your service’s “main” table) and then run dachs limits q.rd

  3. For special effects and further information, see Coverage Metadata in the DaCHS reference documentation
  4. If you have a nice postgres function that splits a simple coverage interval up so the filling factor of a set of new intervals increases (or know a nice, database-compatible algorithm to do so) – please let me know.

Say hello to RegTAP

[image: WIRR in the browser]
GAVO’s WIRR registry interface in action to find resources with radio parallaxes.

RegTAP is one of those standards that a scientist will normally not see – it works in the background and makes, for instance, TOPCAT display the Cone Search services matching some key words. And it’s behind the services like WIRR, our Web Interface to the Relational Registry (“Relational Registry” being the official name for RegTAP) that lets you do some interesting data discovery beyond what current clients support. In the screenshot above, for instance (try it yourself), I’m looking for cone search services having parallaxes presumably from radio observations. You could now transmit the services you’ve found to, say, TOPCAT or your own pyvo-based program to start querying them.

The key point of this query is the use of UCDs – these let services declare fairly unambiguously what kind of physics (if you take that word with a grain of salt) they are talking about. In the example, pos.parallax means, well, a parallax, and the percent character is a wildcard (coming not from UCDs, but from ADQL). That wildcard is a good idea here because without it we might miss things like pos.parallax;obs and pos.parallax;stat.fit that people might have used to distinguish “raw” and ”processed” estimates.

UCDs are great for data discovery. Really.

Sometimes, however, clicking around in menus just isn’t good enough. That’s when you want the full power of RegTAP and write your very own queries. The good news: If you know ADQL (and you should!), you’re halfway there already.

Here’s one example of direct RegTAP use I came up with the other day. The use case was discovering data collections that give the effective temperatures of components of binary star systems.

If you check the UCD list, that “physics” translates into data that has columns with UCDs of phys.temperature and meta.code.multip at the same time. To translate that into a RegTAP query, have a look at the tables that make up a RegTAP service: its ”schema”. Section 8 of the standard lists all the tables there are, and there’s an ADASS poster that has an image of the schema with the more common columns illustrated. Oh, and if you’re new to RegTAP, you’re probably better off briefly studying the examples first to get a feeling for how RegTAP is supposed to work.

You will find that a pair of ivoid – the VO’s global resource identifier – and a per-resource table index uniquely identify a table within the entire registry. So, an ADQL query to pick out all tables containing temperatures and component identifiers would look like this:

SELECT DISTINCT ivoid, table_index
FROM
rr.table_column AS t1
JOIN rr.table_column AS t2
USING (ivoid, table_index)
WHERE t1.ucd='phys.temperature'
AND t2.ucd='meta.code.multip'

– the DISTINCT makes it so even tables that have lots of temperatures or codes only turn up once in our result set, and the somewhat odd self-join of the rr.table_column table with itself lets us say “make sure the two columns are actually in the same table”. Note that you could catch multi-table resources that define the components in one table and the temperatures in another by just joining on ivoid rather than ivoid and table_index.

You can run this query on any RegTAP endpoint: GAVO operates a small network of mirrors behind http://reg.g-vo.org/tap, there’s the ESAC one at http://registry.euro-vo.org/regtap/tap, and STScI runs one at http://vao.stsci.edu/RegTAP/TapService.aspx. Just use your usual TAP client.
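
For instance, with pyvo the query above runs like this (a minimal sketch; swap in any of the endpoints just listed):

import pyvo

regtap = pyvo.dal.TAPService("http://reg.g-vo.org/tap")
tables = regtap.run_sync("""
    SELECT DISTINCT ivoid, table_index
    FROM rr.table_column AS t1
    JOIN rr.table_column AS t2
    USING (ivoid, table_index)
    WHERE t1.ucd='phys.temperature'
    AND t2.ucd='meta.code.multip'
    """)
print(tables.to_table())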

But granted, the result isn’t terribly user-friendly: just identifiers and numbers. We’d at least like to see the names and descriptions of the tables so we know if the data is somehow relevant.

RegTAP is designed so you can locate the columns you would like to retrieve or constrain and then just NATURAL JOIN everything together. The table_description and table_name columns are in rr.res_table, so all it takes to see them is to take the query above and join its result like this:

SELECT table_name, table_description
FROM rr.res_table
NATURAL JOIN (
  SELECT DISTINCT ivoid, table_index
  FROM
  rr.table_column AS t1
  JOIN rr.table_column AS t2
  USING (ivoid, table_index)
  WHERE t1.ucd='phys.temperature'
  AND t2.ucd='meta.code.multip') as q

If you try this, you’ll see that we’d like to get the descriptions of the resources embedding the tables, too, in order to get an idea of what we can expect from a given data collection. And if we later want to find services exposing the tables (WIRR is nice for that – try the ivoid constraint – but for this example all resources currently come from VizieR, so you can directly use VizieR’s TAP service to interact with the tables), you want the ivoids. Easy: Just join rr.resource and pick columns from there:

SELECT table_name, table_description, res_description, ivoid
FROM rr.res_table
NATURAL JOIN rr.resource
NATURAL JOIN (
  SELECT DISTINCT ivoid, table_index
  FROM
  rr.table_column AS t1
  JOIN rr.table_column AS t2
  USING (ivoid, table_index)
  WHERE t1.ucd='phys.temperature'
  AND t2.ucd='meta.code.multip') as q

If you’ve made it this far and know a bit of ADQL, you probably have all it really takes to solve really challenging data discovery problems – as far as Registry metadata reaches, that is, which currently does not include space-time coverage. But stay tuned, more on this soon.

In case you’re looking for a more systematic introduction into the world of the Registry and RegTAP, there are two… ouch. Can I really link to Elsevier papers? Well, here goes: 2014A&C.....7..101D (a.k.a. arXiv:1502.01186) on the Registry as such and 2015A&C....11...91D (a.k.a. arXiv:1407.3083) mainly on RegTAP.

DaCHS 1.1 released

Today, I have released DaCHS 1.1, with the main selling point that DaCHS should now speak TAP 1.1 (as defined in the current draft).

First off, if you’re not yet on DaCHS 1.0, please read the corresponding release article before upgrading.

As usual, the general upgrading instructions are available in the operator’s guide (in short: do a dachs val ALL before the Debian upgrade). This time, I’d recommend to use the opportunity to upgrade your underlying server to stretch if you haven’t done so already. If you do that, please have a look at hints on postgres upgrades. Stretch comes with postgres 9.6 (jessie: 9.4). Postgres upgrades are generally safe, but please take a dump before migrating anyway.

So, with this out of the way, here’s a short list of the major changes from DaCHS 1.0 to DaCHS 1.1:

    • DaCHS now officially requires python 2.7. If this really is a problem for you, please shout – it wouldn’t be hard to maintain 2.6 compatibility, but by now we feel there’s no reason to bother any more.

    • Now supporting TAP 1.1; in particular, TOP n doesn’t trump MAXREC any more, and it doesn’t affect OVERFLOW indication, which may break things that used TOP to override DaCHS’ default TAP match limit of 2000 (see the little sketch after this list). Also, TAP_SCHEMA is updated (this happens as a side effect of dachs upgrade).

    • Now serialising spoint, scircle, and friends to DALI 1.1 xtypes (timestamp, point, polygon, circle). Fields explicitly marked with adql:POINT or adql:REGION will still be serialised to STC-S; use these markings only if you have no choice (DaCHS itself only has them for obscore and epntap s_region right now).

    • The output column selection is sanitised. This may make for slight changes in service responses, in particular in VOTable formats. See Output Tables in the reference documentation for details if you think this might hit you.

    • DaCHS no longer comes with an outdated version of pyparsing and instead uses what’s installed on the system. The Debian package further re-uses additional system resources if available (rjsmin, jquery).

    • DaCHS now tries a bit harder to come up with sensible names for SODA result files.

    • map/@source is no longer limited to identifier-like strings; any key that’s in your source is fair game.

    • For incremental imports with data that’s updated now and then, there’s now ignoreSources/@fromdbUpdating.

    • Relative imports from custom code (“import foo” in a custom core, for instance, getting res/foo.py) no longer work. See Importing Modules in the reference documentation for details.

    • This release fixes a severe bug in the creation of obscore metadata from SSAP tables. If you use the //obscore#publishSSAPHCD or //obscore#publishSSAPMIXC mixins, update the obscore definitions by running dachs imp -m <rdid>, followed by dachs imp //obscore (the latter is only necessary once at the end).

    • You can now define a footer.html template that’s added at the foot of the main page content – with a bit of CSS magic, this lets you overwrite almost anything on DaCHS HTML pages.
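
To illustrate the TOP versus MAXREC item above: with TAP 1.1 semantics, a client-supplied MAXREC now caps the result even when the query asks for a larger TOP, and the overflow indicator is set accordingly. Here is a little sketch with pyvo (the endpoint is just our data center’s TAP service; any DaCHS 1.1 TAP service should behave the same way):

from pyvo.dal import TAPService

tap = TAPService("http://dc.g-vo.org/tap")

# TOP 100 no longer overrides the match limit: with maxrec=10,
# only ten rows come back, and the VOTable carries an overflow marker.
result = tap.run_sync(
    "SELECT TOP 100 * FROM tap_schema.tables", maxrec=10)
print(len(result))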

As always, please complain early if something breaks for you; our regression tests can only cover so much. In particular, our support list is there for you.

Update (2017-12-06): In particular on jessie, you may see that all DaCHS packages are being held back. To resolve this situation, manually say apt-get install python-gavoutils python-gavostc.

Heidelberg Data Center Down^WUp again

Well, it has happened – perhaps it was the strain of restoring a couple of terabytes of data (as reported yesterday), perhaps it’s uncorrelated, but our main database server’s RAID threw errors and then disappeared from the SCSI bus today at about 15:03 UTC.

This means that all services from http://dc.g-vo.org are broken for the moment. We’re sorry, and we will try to at least limp on as fast as possible.

Update (2017-11-13, 14:30 UTC): Well, it’s official. What’s broken is the lousy Adaptec controller – whatever configuration we tried, it can’t talk to its backplane any more. Worse, we don’t have a spare part for that piece here. We’re trying to get one as quickly as possible, but even medium-sized shops don’t have multi-channel SAS controllers in stock, so it’ll have to be express mail.

Of course, the results of the weekend’s restore are lost; so, we’ll need about 24 hours of restore again to get up to 90% of the services after the box is back up, with large tables being restored after that. Again, we’re unhappy about the long downtime, but it could only have been averted by having a hot spare, which for this kind of infrastructure just wouldn’t have been justifiable over the last ten years.

Another lesson learned: Hardware RAID sucks. It was really hard to analyse the failure, and the messages of the controller BIOS were completely unhelpful. We, at least, will migrate to JBOD (one of the cool IT acronyms with a laid-back expansion: Just a Bunch Of Disks) and software RAID.

And you know what? At least the box had two power supplies. If these weren’t redundant, you bet the power supply would have failed.

To give you an idea how bad things are, here is the open server with the controller card that probably caused the mayhem (left), and 12 TB of fast disk, yearning for action (right).

[A database server in pieces]

Update (2017-11-14, 12:21 UTC): We’re cursed. The UPS guys with the new controller were in the main institute building. They claimed they couldn’t find anyone. Ok, our janitor is on sick leave, and it was lunch break, but still. It can’t be that hard to walk up a single flight of steps. Do we really have to wait another day?

Update (2017-11-14, 14:19 UTC): Well, UPS must have read this – or the original delivery report was bogus. Anyway, not an hour after the last entry the delivery status changed to “delivered”, and there the thing was in our mailbox.

Except – it wasn’t the controller in the first place. It turned out that, in fact, four disks had failed at the same time. It’s hard to believe but that’s what it is. Seems we’ll have to step carefully until the disks are replaced. We’ll run a thorough check tonight while we prepare the database tables.

Unless more disaster strikes, we should be back by tomorrow morning CET – but without the big tables, and I’m not sure yet whether I dare put them back onto these flimsy, enterprise-class, 15k SAS disks. Well, I’ll grant you they’ve run for five years now.

Update (2017-11-15, 14:37 UTC): After a bit more consideration, I figured I wouldn’t trust the aging enterprise disks any more. Our admins then gave me a virtual machine on one of their boxes that should be powerful enough to keep the data center afloat for a while. So, the data center is back up at 90% (counting by the number of regression tests still failing) since an hour ago or so.

Again, the big tables are missing (and a few obscure services whose RDs showed bitrot and need polishing); they should come in over the next few days, one by one. Provided the VM isn’t much slower than our DB server, you should see about two of them arrive per day, with my planned sequence being hsoy, ppmxl, gps1, gaia, 2mass, sdssdr7, urat1, wise, ucac5, ucac4, rosat, ucac3, mwsc, mwsc-e14a, usnob, supercosmos.

Feel free to vote tables up if you severely miss a table.

And all this assumes no further disaster strikes…

Update (2017-11-16, 9:22 UTC): Well, it ain’t pretty. The first large catalog, HSOY, is finally in, and the CLUSTER operation (which dominates restore time) took almost 12 hours; and HSOY, at 0.5 Gigarecord, isn’t all that large. So, our replacement machine really is a good deal slower than our normal database server, which did that operation in less than three hours. I guess you’ll want to do your large-table queries on a different service for the next couple of weeks. Use the Registry!

Update (2017-11-20, 9:05 UTC): With a bit more RAM (DaCHS operators: version 1.1 will have a new configuration item for indexing work memory!), things have been going faster over the weekend. We’re now down to 15 failing regression tests (of 330), with just 4 large catalogs still missing, and a few nitty-gritty, almost invisible tables still needing some manual work.

Update (2017-11-23, 14:51 UTC): Only 10 regression tests are still failing, but progress has become slow again – the machine has been clustering supercosmos.data for the last 36 hours now; it’s not that huge a table, so it’s a bit hard to understand why it is holding things up so much. On the plus side, new SSDs for our database server are being shipped, so we should see faster operation soon.

Update (2017-12-01, 13:05 UTC): We’ve just switched the database back to our own server with its fresh SSDs. A few esoteric big tables are still missing, but we’d say the crisis is over. Hence, this is the last update. Thank you for your attention.