Horror vacui begone

browser and editor
Mikhail’s qrdcreator in a browser and an editor with a dachs start-produced template.

One of the major usability issues our publishing suite DaCHS has for operators (i.e., people who want to publish data) is the “horror vacui”: How do I start a Resource Descriptor (RD – the file DaCHS interprets to build services)?

I used to recommend starting by having a look at the RDs of our existing services and picking whatever best matches your publication project. But finding a matching service and figuring out what is generic, what’s a special property of the concrete data collection, and what’s a hack that should not be reproduced isn’t straightforward at all, not to mention the fact that some of those RDs have been in maintenance mode for almost 10 years and hence may show deprecated practices.

Then came the VESPA implementation workshop last year, during which Mikhail Minin showed me a piece of javascript and HTML (source on github) he had written to overcome the empty editor window. Essentially, Mikhail has built a fairly comprehensive form interface in a web browser that asks people the right questions to eventually write an RD for EPN-TAP (i.e., solar system) resources.

I had planned to generalise Mikhail’s approach to several types of resources supported by DaCHS, ideally inferring the questions to ask from the built-in documentation of mixins and applys. But during the last year, whenever I felt it would be a good time to tackle that generalisation, I quickly gave up again. The obstacles were mostly rather trivial, such as how to tell apart repeatable metadata (waveband, say) and non-repeatable metadata (instrument, say). But they were bad enough that I quickly found something else to do each time I got started.

Eventually, I gave up on a menu interface altogether – making it flexible and generatable at the same time seemed a fairly complex problem. But that doesn’t mean I forgot about overcoming the horror vacui thing. So, when forms aren’t flexible enough for data entry, where do you turn? Right! A text editor.

Enter dachs start. That’s a new DaCHS subcommand that gets you started with your RD. For one, you can list the templates available:

$ dachs start list
siap -- Image collections via SIAP1 and TAP
ssap+datalink -- Spectra via SSAP and TAP, going through datalink
epntap -- Solar system data via EPN-TAP 2.0
scs -- Catalogs via SCS and TAP

More templates are planned; siap+datalink, for instance, would cover some frequent use cases. Feel free to mail in requests.

Once you find a suitable template, create your future resource directory, enter it and run dachs start again, this time passing the name of the template you want:

$ mkdir ex_data
$ cd ex_data
$ dachs start scs
$ head -16 q.rd | tail -9
<resource schema="ex_data">
  <meta name="creationDate">2018-04-13T12:34:31Z</meta>

  <meta name="title">%title -- not more than a line%</meta>
  <meta name="description">
    %this should be a paragraph or two (take care to mention salient terms)%
  </meta>
  <!-- Take keywords from 
    http://astrothesaurus.org/thesaurus/hierarchical-browse/

dachs start uses the directory name as the new schema name and then writes a file q.rd (which is the canonical name for the “main” RD in a resource). Within this file, you’ll see things to fill out between pairs of percent signs with short explanations. Where longer explanations are necessary, embedded comments should help.

To give you an idea of the intended use: As a vim user, I’ve put

augroup rd
  au!
  au BufRead,BufNewFile *.rd imap <F8> <Esc>/%[^%]*%<CR>a
  au BufRead,BufNewFile *.rd imap <F9> <Esc>cf%
augroup END

into my ~/.vimrc. That way, while editing the template into an actual RD, hitting F8 takes me to the next thing to be edited; I can then read the instructions, and when I have made up my mind, I can either delete the template element or hit F9 and replace the explanation text with whatever belongs there.
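
If vim isn’t your editor, the percent-sign convention is easy enough to exploit from any language. Here is a small, purely illustrative Python helper that lists whatever placeholders are still left in a template-derived RD (file name and regular expression are my own choices, not anything DaCHS ships):

import re
import sys

# dachs start marks things to fill out as %short explanation%;
# list what is still left, with line numbers, so nothing gets forgotten.
with open(sys.argv[1] if len(sys.argv) > 1 else "q.rd") as f:
    content = f.read()

for match in re.finditer(r"%[^%]+%", content):
    line_no = content.count("\n", 0, match.start()) + 1
    print("line {}: {}".format(line_no, match.group()))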

The command is available starting with the 1.1.3 beta (available now by switching to the beta repo) and will be part of the 1.2 release, planned for early June after the Victoria interop.

If you have a publication project: just try it out and give feedback. Note that the templates haven’t actually been tested yet, and the comments were written by a DaCHS and VO nerd, so they might not always be great either. Thus, when you get stuck: complain early, complain often!

Speak out on ADQL 2.1

If you’ve always wanted to be part of a standardisation process within the IVOA (and who would not?), the time has rarely been as good as now. Because: We’re updating ADQL! Yes! The ADQL you are writing your queries in will receive a few more language elements, and we’re carefully trying to heal a few things that turned out to be warts. And while some of the changes are as dull and boring as you may expect standards work to be, on some of them you may wish to have a say.

Also, you can try things out – the GAVO data center TAP endpoint at http://dc.g-vo.org/tap already has most of the proposed features, and the new DaCHS beta 1.1.2 (out since last Friday) does, too. So, if you’re running DaCHS yourself, you can start playing after switching to the beta repository.

What’s new?

  • You’re now supposed to write the standard crossmatch as DISTANCE(ra1, dec1, ra2, dec2)<dist. This replaces the old dance with 1=CONTAINS(POINT(), CIRCLE()) that you’ve probably learned to hate. Finally: Crossmatching without having to resort to TOPCAT’s example menu… (a little pyvo sketch combining this with a few of the other new constructs follows right after this list).
  • ADQL geometries used to require a first argument that would give the reference frame, as in POINT('ICRS', ra, dec). The hope was that services could then automagically make a statement like CONTAINS(point_in_icrs, circle_in_galactic) work as presumably intended. Few services ever did (DaCHS still tries reasonably hard), and when they did, there were all kinds of opaque oddities. One of the most common sources of confusion is the question of what a service is supposed to do with POINT('GALACTIC', ra, dec), assuming it knows that ra and dec are in, say, B1950 FK4. Also, is there any expectation that services attempt to do anything beyond a simple rotation (FK4, for instance, rotates noticeably against the ICRS, so proper motions would need to get fixed, too)? In all, the frame as a first argument was ill thought-out, and it’s been deprecated. Simply don’t put in the string-typed first argument any more. POINT(long, lat) does it. True: This, more than ever, calls for an ADQL astrometry library so you can easily convert, at least, between Galactic and ICRS (probably a few more would be useful, too). More on this in some future post.
  • Services should have CAST now. Sometimes you want to turn a number into a string or a string into a timestamp. In such cases, you can write CAST('1991-02-01', TIMESTAMP) now. The details are not quite, excuse me, cast in stone yet, so if you have a use case for this kind of thing, speak up now. The current draft also calls for a TIMESTAMP(tx) function – but since that’s really not different from CAST(tx, TIMESTAMP), I’m trying to dissuade people from adding it.
  • Services should have an IN_UNIT function now. That’s a nifty thing in particular when you’re re-using queries on different services. Just write, say, IN_UNIT(pmra, 'deg/yr') and never worry again if it’s arcsec/yr, mas/yr, rad/cy, or whatever. The second argument, by the way, is written according to the Units in the Virtual Observatory standard. It’s an optional feature according to the current standard, so perhaps it’s too early to party, but I’ve found this extremely useful, and so I hope we’ll see widespread adoption.
  • Services should now have set operations. These are UNION, EXCEPT, and INTERSECT and are useful when you have two queries that result in the same table schema (because they won’t work otherwise). Say you have two complex ways to filter rows from the table source, but you want to process both sorts of results further on – you can then say something like
      SELECT <whatever complex> FROM
        ( (SELECT a,b,c FROM source
            WHERE <crazy stuff>
            GROUP BY a, b, c)
          UNION
          (SELECT a,b,c FROM source
            WHERE <other crazy stuff>
            GROUP BY a, b, c) ) AS q
      WHERE <more complex stuff over a, b, and c>
    

    – and similarly, EXCEPT lets you “punch a hole” in a result table. Another interesting use case would be to query many tables on a service like VizieR in one go; that still works if you make sure the tables defined by the sub-queries have the same columns. Given that a lot of cross-table operations actually boil down to JOINs and WHERE clauses, the set operations are used less than one would expect. But if you need them, there’s no real alternative (short of downloading far too much and performing the operation locally, which of course defeats the purpose of TAP).

  • Common table expressions (“WITH”). DaCHS doesn’t do these yet, and it will only pick them up if someone else implements them first. In the way ADQL 2.1 has them (“nonrecursive”), CTEs are little more than syntactic sugar, and I’m not quite sure if the additional implementation complexity is worth it. If you’re curious, check CTEs in the postgres manual. If that makes you drool for WITH in ADQL, let me know. It’ll not be too hard to sway me to put them in.
  • Bitwise Operations. That’s when integers are treated as bit patterns. If this sounds like nerd stuff to you, well, it happens quite a bit in actual catalogs. See, for instance, Note 3 for the PPMXL. You’d need the flags column described there if you wanted to exclude PPMXL objects that replaced multiple USNO-B1.0 objects (bit 3); right now, you have to write something like MOD(flags,16)>7 for that. That’s a bit of magic that everyone will have to think about for a while. With bitwise operations, you’ll just write BITWISE_AND(flags,8)=8, which will look familiar to everyone who has used the pattern before (in particular, it’s clear we’re talking about bit 3). There still is discussion whether bitwise operations are common enough to warrant special syntax – the draft currently says the above should be written as flags&8=8 – or whether the functions DaCHS has at the moment (they’re called BITWISE_AND, BITWISE_OR, BITWISE_XOR, and BITWISE_NOT) are good enough.
  • Offset. If you’ve ever done anything with ADQL, you’ll know that SELECT TOP 10 * FROM hipparcos.main ORDER BY parallax DESC will give you the 10 objects with the largest parallaxes. But what if you want the next 10 closest stars? Well, OFFSET to the rescue:
    SELECT TOP 10 * 
    FROM hipparcos.main 
    ORDER BY parallax DESC
    OFFSET 10
    

    There is another, more sinister, application for OFFSET, which happens to be the actual reason I put it into DaCHS’ ADQL ages ago: written as OFFSET 0, several databases use it as a barrier for the query planner. This is explained to some degree in the classic DaCHS TAP example Crossmatch for a Guide Star – which still mentions the first hack I had built into DaCHS to let query authors rein in overzealous query planners.

  • LOWER and ILIKE. ADQL has been extremely weak on the side of text processing, so weak indeed that it wasn’t nearly enough to cover the use cases for the registry when it moved to RegTAP. ADQL 2.1 adds two basic features – LOWER, a function that lets people query in a case-insensitive fashion, and ILIKE, an operator that is like LIKE, but again ignores case. While both features are obviously great as soon as people dump any kind of text (think object names) into their databases, I’m not terribly happy with ILIKE, as it does the same as RegTAP’s ivoa_nocasematch user defined function, and it’s always bad when two standards foresee two different mechanisms for the same thing.
  • Geometry-typed arguments. CIRCLE and POLYGON now accept POINTs in alternative constructor functions. That is, you can now say CIRCLE(POINT(ra, dec), radius) in addition to the traditional CIRCLE(ra, dec, radius). In itself, that’s probably not terribly exciting, but when you have actual POINTs in your database, it’s much more compact to write, say,
      SELECT *
      FROM zcosmos.data
      WHERE 0=CONTAINS(
        ssa_targetpos,
        CIRCLE(ssa_location, ssa_aperture))
    

    (which would return rows for those spectra for which the declared aperture does not contain the declared target). Before, you’d have had to write some fairly ugly expression involving COORD1 and whatnot in order to achieve the same effect.

  • Boolean expressions. That’s another one that’s still a bit up in the air. First, the rough goal is to allow boolean values in ADQL-accessible tables, which so far have been a hack at best. In the future, you should be able to say WHERE is_broken=True. However, people coming from other languages will find that odd, and indeed, in python I’d cringe at if is_broken==True:. What I’d expect is if is_broken:. Do we want this in ADQL? Currently, it’s in the grammar (more or less like this), but this kind of thing makes it still harder to produce useful syntax error messages. Is it worth it, either way? I’m not sure.
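
To see several of the new constructs in one place, here is a small pyvo sketch against the Heidelberg endpoint mentioned above. It combines DISTANCE, IN_UNIT, and OFFSET; the table is the hipparcos.main from the OFFSET item, but the column names (hip, pmra, ra, dec) are plausible guesses rather than gospel, so adapt them to whatever you actually query:

import pyvo

# The GAVO data center already understands most of the ADQL 2.1 proposals.
svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

# DISTANCE replaces the CONTAINS/POINT/CIRCLE dance, IN_UNIT converts the
# proper motion on the fly, and OFFSET skips the ten largest parallaxes.
query = """
SELECT TOP 10 hip, IN_UNIT(pmra, 'deg/yr') AS pmra_deg
FROM hipparcos.main
WHERE DISTANCE(ra, dec, 56.75, 24.12) < 5
ORDER BY parallax DESC
OFFSET 10
"""
for row in svc.run_sync(query):
    print(row["hip"], row["pmra_deg"])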

That about concludes my quick review of the new features of ADQL 2.1. If you’d like to know more, the current draft is on the IVOA document repository, and if you can deal with version control (you should!), you can follow the bleeding edge in the ADQL document in Volute. Discussion happens on the DAL mailing list.

Update (2018-04-13): Well, as to the CTEs, I couldn’t resist after all, and they’re in with DaCHS 1.1.3. And I have to say I love them – they weren’t hard to put in, and once they’re there, they make so many queries a good deal more readable than before. I’ve even put in a server-defined example for CTEs on the Heidelberg TAP service showcasing a particularly compelling use case.

Space and Time not lost on the Registry

Histogram: observation dates of an image service

A histogram of times for which the Palomar-Leiden service has images: That’s temporal service coverage right there.

If you are an astronomer and you’ve ever tried looking for data in the Virtual Observatory Registry, chances are you have wondered “Why can’t I enter my position here?” Or perhaps “So, I’m looking for images in [NIII] – where would I go?”

Both of these are examples of the use of Space-Time Coordinates (STC) in data discovery – yes, spectral coordinates count as STC, too, and I could make an argument for it. But this post is about something else: None of this has worked in the Registry up to now.

It’s time to mend this blatant omission. After a bit of discussion on some of the IVOA’s mailing lists, I therefore posted an IVOA note last Thursday proposing the next steps. It is, perhaps with a bit of over-confidence, called A Roadmap for Space-Time Discovery in the VO Registry. And I’d much appreciate feedback, in particular if you are a VO user and have ideas on what you’d like to do with such a facility.

In this post, I’d like to give a very quick run-down on what is in it for (1) VO users, (2) service operators in general, and (3) service operators who happen to run DaCHS.

First, users. We already are pretty good on spatial coverage (for about 13000 of almost 20000 resources), so it might be worth experimenting with that. For now, the corresponding table is only available on the RegTAP mirror at http://dc.g-vo.org/tap. There, you can try queries like

select ivoid from
rr.table_column
natural join rr.stc_spatial
where
  1=contains(gavo_simbadpoint('HDF'), coverage)
  and ucd like 'phot.flux;em.radio%'

to find – in this case – services that have radio fluxes in the area of the Hubble Deep Field. If these lines scare you or you don’t know what to do with the stupid ivoids, check the previous post on this blog – it explains a bit more about RegTAP and why you might care.
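
If you’d rather script that than type it into a TAP client, a few lines of pyvo will run the same query; this is just the text from above wrapped into a call (and, as said, rr.stc_spatial and gavo_simbadpoint are so far only available on the Heidelberg mirror):

import pyvo

# rr.stc_spatial and gavo_simbadpoint currently only exist on this mirror.
regtap = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

query = """
SELECT ivoid FROM
rr.table_column
NATURAL JOIN rr.stc_spatial
WHERE
  1=CONTAINS(gavo_simbadpoint('HDF'), coverage)
  AND ucd LIKE 'phot.flux;em.radio%'
"""
for rec in regtap.run_sync(query):
    print(rec["ivoid"])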

Similarly cool things will, hopefully, some day be possible in spectrum and time. For instance, if you were interested in [SII] fluxes in the Crab Nebula in the early sixties, you could, some day, write

SELECT ivoid FROM
rr.stc_temporal
NATURAL JOIN rr.stc_spectral
NATURAL JOIN rr.stc_spatial
WHERE
  1=CONTAINS(gavo_simbadpoint('M1'), coverage)
  AND 1=ivo_interval_overlaps(
    6.69e-7, 6.75e-7, 
    wavelength_start, wavelength_end)
  AND 1=ivo_interval_overlaps(
    36900, 38800,
    time_start, time_end)

As you can see, the spectral coordinate will, following (admittedly broken) VO convention, be given in meters of vacuum wavelength, and time in MJD. In particular the thing with the wavelength isn’t quite settled yet – personally, I’d much rather have energy there. For one, it’s independent of the embedding medium, but much more excitingly, it even remains somewhat sensible when you go to non-electromagnetic messengers.
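
In case you are wondering how to come up with numbers like 36900 or 6.69e-7 in the first place, astropy does the conversions for you; a quick sketch (the exact bounds are of course yours to choose):

from astropy import units as u
from astropy.time import Time

# Times in the coverage tables are MJD...
print(Time("1960-01-01").mjd, Time("1965-03-01").mjd)

# ...and the spectral axis is vacuum wavelength in metres, so the
# [SII] doublet at 6716 and 6731 Angstrom becomes:
print((6716*u.AA).to(u.m).value, (6731*u.AA).to(u.m).value)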

A pattern I’m trying to establish is the use of the user-defined function ivo_interval_overlaps, also defined in the Note. This is intended to allow robust query patterns in the presence of two intrinsically interval-valued things: The service’s coverage and the part of the spectrum you’re interested in, say. With the proposed pattern, either of these can degenerate to a single point and things still work. Things only break when both the service and you figure that “Aw, Hα is just 656.3 nm” and one of you omits a digit or adds one.

But that’s academic at this point, because really few resources define their coverage in time and spectrum. Try it yourself:

SELECT COUNT(*) FROM (
  SELECT DISTINCT ivoid FROM rr.stc_temporal) AS q

(the subquery with the DISTINCT is necessary because a single resource can have multiple rows for time and spectrum when there’s multiple distinct intervals – think observation campaigns). If this gives you more than a few dozen rows when you read this, I strongly suspect it’s no longer 2018.

To improve this situation, the service operators need to provide the information on the coverage in their resource records. Indeed, the registry schemas already have the notion of a coverage, and the Note, in its core, simply proposes to add three elements to the coverage element of VODataService 1.1. Two of these new elements – the coverage in time and in spectrum – are simple floating-point intervals and can be repeated in order to allow non-contiguous coverage. The third element, the spatial coverage, uses a nifty data structure called a MOC, which expands to “HEALPix Multi-Order Coverage map” and is the main reason why I claim we can now pull off STC in the Registry: MOCs let databases and other programs easily and quickly manipulate areas on the sphere. Without MOCs, that’s a pain.

So, if you have registry records somewhere, please add the elements as soon as you can – if you don’t know how to make a MOC: CDS’ Aladin is there to help. In the end, your coverage elements should look somewhat like this:

<coverage>
  <spatial>3/336,338,450-451,651-652,659,662-663 
    4/1816,1818-1819,1822-1823,1829,1840-1841</spatial>
  <temporal>37190 37250</temporal>
  <temporal>54776 54802</temporal>
  <spectral>3.3e-07 6.6e-07</spectral>
  <spectral>3.5e-06 2.0e-05</spectral>
  <waveband>Optical</waveband>
  <waveband>Infrared</waveband>
</coverage>

The waveband elements are left-overs from VODataService 1.1. They are still in use (prominently, for one, in SPLAT), and it’s certainly still a good idea to keep giving them for the foreseeable future. You can also see how you would represent multiple observing campaigns and different spectral ranges.

Finally, if you’re running DaCHS and you’re using it to generate registry records (and there’s almost no excuse for not doing so), you can simply write a coverage element into your RD starting with DaCHS 1.2 (or, if you run betas, 1.1.1, which is already available). You’ll find lots of examples at the usual place. As a relatively interesting example, consider the resource descriptor of plts. It has this:


<coverage>
  <updater spaceTable="data" spectralTable="data" mocOrder="4"/>
  <spectral>3.3e-07 6.6e-07</spectral>
  <temporal>37190 37250</temporal>
  <temporal>38776 38802</temporal>
  <temporal>41022 41107</temporal>
  <temporal>41387 41409</temporal>
  <temporal>41936 41979</temporal>
  <temporal>43416 43454</temporal>
  <spatial>3/282,410 4/40,323,326,329,332,387,390,396,648-650,1083,1085,1087,1101-1103,1123,1125,1132-1134,1136,1138-1139,1144,1146-1147,1173-1175,1216-1217,1220,1223,1229,1231,1235-1236,1238,1240,1597,1599,1614,1634,1636,1728,1730,1737,1739-1740,1765-1766,1784,1786,2803,2807,2809,2812</spatial>
</coverage>

This particular service archives plate scans from the Palomar-Leiden Trojan surveys; these were looking for Trojan asteroids (of Jupiter) using the Palomar 122 cm Schmidt and were conducted in several shortish campaigns between 1960 and 1977 (incidentally, if you’re looking for things near the Ecliptic, this stuff might still hold valuable insights for you). Because the fill factor for the whole time period is rather small, I manually extracted the time coverage; for that, I ran select dateobs from plts.data via TAP and made the histogram plot above. Zooming in a bit, I read off the limits in TOPCAT’s coordinate display.
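
If you want to re-do that yourself, the whole exercise is a handful of lines of pyvo and matplotlib (table and column names as used above; whether dateobs really is what your own table calls the epoch is for you to check):

import pyvo
from matplotlib import pyplot

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

# Pull the epochs of all PLTS plates...
dates = svc.run_sync(
    "SELECT dateobs FROM plts.data").to_table()["dateobs"]

# ...and eyeball the observing campaigns, as in the plot above.
pyplot.hist(dates, bins=100)
pyplot.xlabel("dateobs [MJD]")
pyplot.show()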

The other coverages, however, were put in automatically by DaCHS. That’s what the updater element does: for each axis, you can say where DaCHS should look, and it will then fill in the appropriate data from what it guesses gives the relevant coordinates – that’s straightforward for standard tables like the ones behind SSAP and SIAP services (or obscore tables, for that matter), perhaps a bit more involved otherwise. To say “just do it for all axes”, give the updater a single sourceTable attribute.

Finally, in this case I’m overriding mocOrder, the order down to which DaCHS tries to resolve spatial features. I’m doing this here because in determining the coverage of image services DaCHS right now only considers the centers of the images, and that’s severely underestimating the coverage here, where the data products are the beautiful large Schmidt plates. Hence, I’m lowering the resolution from the default 6 (about one degree linearly) to still give some approximation to the actual data coverage. We’ll fix the underlying deficit as soon as pgsphere, the postgres extension which is actually dealing with all the MOCs, has support for turning circles and polygons into MOCs.

When you have defined an updater, just run dachs limits q.rd, and DaCHS will carefully (preserving your indentation) re-write the RD to contain what DaCHS has worked out from your table (but careful: it will overwrite what was previously there; so, make sure you ask DaCHS to only deal with axes you’re not dealing with manually).

If you feel like writing code discovering holes in the intervals, ideally already in the database: that would be great, because the tighter the intervals defined, the fewer false positives people will have in data discovery.

The take-away for DaCHS operators is:

  1. Add STC coverage to your resources as soon as you’ve updated to DaCHS 1.2
  2. If you don’t have to have the tightest coverage declaration conceivable, all you have to do is add
      <coverage>
        <updater sourceTable="my_table"/>
      </coverage>

    to your RD (where my_table is the id of your service’s “main” table) and then run dachs limits q.rd

  3. For special effects and further information, see Coverage Metadata in the DaCHS reference documentation
  4. If you have a nice postgres function that splits a simple coverage interval up so the filling factor of a set of new intervals increases (or know a nice, database-compatible algorithm to do so) – please let me know.

Say hello to RegTAP

[image: WIRR in the browser]
GAVO’s WIRR registry interface in action to find resources with radio parallaxes.

RegTAP is one of those standards that a scientist will normally not see – it works in the background and makes, for instance, TOPCAT display the Cone Search services matching some key words. And it’s behind the services like WIRR, our Web Interface to the Relational Registry (“Relational Registry” being the official name for RegTAP) that lets you do some interesting data discovery beyond what current clients support. In the screenshot above, for instance (try it yourself), I’m looking for cone search services having parallaxes presumably from radio observations. You could now transmit the services you’ve found to, say, TOPCAT or your own pyvo-based program to start querying them.

The key point of this query is the use of UCDs – these let services declare fairly unambiguously what kind of physics (if you take that word with a grain of salt) they are talking about. In the example, pos.parallax means, well, a parallax, and the percent character is a wildcard (coming not from UCDs, but from ADQL). That wildcard is a good idea here because without it we might miss things like pos.parallax;obs and pos.parallax;stat.fit that people might have used to distinguish “raw” and “processed” estimates.

UCDs are great for data discovery. Really.

Sometimes, however, clicking around in menus just isn’t good enough. That’s when you want the full power of RegTAP and write your very own queries. The good news: If you know ADQL (and you should!), you’re halfway there already.

Here’s one example of direct RegTAP use I came up with the other day. The use case was discovering data collections that give the effective temperatures of components of binary star systems.

If you check the UCD list, that “physics” translates into data that has columns with UCDs of phys.temperature and meta.code.multip at the same time. To translate that into a RegTAP query, have a look at the tables that make up a RegTAP service: its ”schema”. Section 8 of the standard lists all the tables there are, and there’s an ADASS poster that has an image of the schema with the more common columns illustrated. Oh, and if you’re new to RegTAP, you’re probably better off briefly studying the examples first to get a feeling for how RegTAP is supposed to work.

You will find that a pair of ivoid – the VO’s global resource identifier – and a per-resource table index uniquely identify a table within the entire registry. So, an ADQL query to pick out all tables containing temperatures and component identifiers would look like this:

SELECT DISTINCT ivoid, table_index
FROM
rr.table_column AS t1
JOIN rr.table_column AS t2
USING (ivoid, table_index)
WHERE t1.ucd='phys.temperature'
AND t2.ucd='meta.code.multip'

– the DISTINCT makes it so even tables that have lots of temperatures or codes only turn up once in our result set, and the somewhat odd self-join of the rr.table_column table with itself lets us say “make sure the two columns are actually in the same table”. Note that you could catch multi-table resources that define the components in one table and the temperatures in another by just joining on ivoid rather than ivoid and table_index.

You can run this query on any RegTAP endpoint: GAVO operates a small network of mirrors behind http://reg.g-vo.org/tap, there’s the ESAC one at http://registry.euro-vo.org/regtap/tap, and STScI runs one at http://vao.stsci.edu/RegTAP/TapService.aspx. Just use your usual TAP client.
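
With pyvo, “your usual TAP client” can simply be a few lines of Python; here is the query from above against the Heidelberg mirror (any of the three endpoints would do just as well):

import pyvo

regtap = pyvo.dal.TAPService("http://reg.g-vo.org/tap")

query = """
SELECT DISTINCT ivoid, table_index
FROM
rr.table_column AS t1
JOIN rr.table_column AS t2
USING (ivoid, table_index)
WHERE t1.ucd='phys.temperature'
AND t2.ucd='meta.code.multip'
"""
# to_table() yields an astropy table you can sort, filter, or save.
print(regtap.run_sync(query).to_table())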

But granted, the result isn’t terribly user-friendly: just identifiers and numbers. We’d at least like to see the names and descriptions of the tables so we know if the data is somehow relevant.

RegTAP is designed so you can locate the columns you would like to retrieve or constrain and then just NATURAL JOIN everything together. The table_description and table_name columns are in rr.res_table, so all it takes to see them is to take the query above and join its result like this:

SELECT table_name, table_description
FROM rr.res_table
NATURAL JOIN (
  SELECT DISTINCT ivoid, table_index
  FROM
  rr.table_column AS t1
  JOIN rr.table_column AS t2
  USING (ivoid, table_index)
  WHERE t1.ucd='phys.temperature'
  AND t2.ucd='meta.code.multip') as q

If you try this, you’ll see that we’d like to get the descriptions of the resources embedding the tables, too, in order to get an idea what we can expect from a given data collection. And if we later want to find services exposing the tables (WIRR is nice for that – try the ivoid constraint – but for this example all resources currently come from VizieR, so you can directly use VizieR’s TAP service to interact with the tables), you want the ivoids. Easy: Just join rr.resource and pick columns from there:

SELECT table_name, table_description, res_description, ivoid
FROM rr.res_table
NATURAL JOIN rr.resource
NATURAL JOIN (
  SELECT DISTINCT ivoid, table_index
  FROM
  rr.table_column AS t1
  JOIN rr.table_column AS t2
  USING (ivoid, table_index)
  WHERE t1.ucd='phys.temperature'
  AND t2.ucd='meta.code.multip') as q

If you’ve made it this far and know a bit of ADQL, you probably have all it really takes to solve really challenging data discovery problems – as far as Registry metadata reaches, that is, which currently does not include space-time coverage. But stay tuned, more on this soon.

In case you’re looking for a more systematic introduction into the world of the Registry and RegTAP, there are two… ouch. Can I really link to Elsevier papers? Well, here goes: 2014A&C.....7..101D (a.k.a. arXiv:1502.01186) on the Registry as such and 2015A&C....11...91D (a.k.a. arXiv:1407.3083) mainly on RegTAP.

DaCHS 1.1 released

Today, I have released DaCHS 1.1, with the main selling point that DaCHS should now speak TAP 1.1 (as defined in the current draft).

First off, if you’re not yet on DaCHS 1.0, please read the corresponding release article before upgrading.

As usual, the general upgrading instructions are available in the operator’s guide (in short: do a dachs val ALL before the Debian upgrade). This time, I’d recommend to use the opportunity to upgrade your underlying server to stretch if you haven’t done so already. If you do that, please have a look at hints on postgres upgrades. Stretch comes with postgres 9.6 (jessie: 9.4). Postgres upgrades are generally safe, but please take a dump before migrating anyway.

So, with this out of the way, here’s a short list of the major changes from DaCHS 1.0 to DaCHS 1.1:

    • DaCHS now officially requires python 2.7. If this really is a problem for you, please shout – it wouldn’t be hard to maintain 2.6 compatibility, but by now we feel there’s no reason to bother any more.
    • Now supporting TAP 1.1; in particular, TOP n doesn’t trump MAXREC any more, and it doesn’t affect OVERFLOW indication, which may break things that used TOP to override DaCHS’ default TAP match limit of 2000 (see the pyvo sketch after this list for passing MAXREC instead). Also, TAP_SCHEMA is updated (this happens as a side effect of dachs upgrade).
    • Now serialising spoint, scircle, and friends to DALI 1.1 xtypes (timestamp, point, polygon, circle). Fields explicitly marked with adql:POINT or adql:REGION will still be serialised to STC-S. Do this only if you have no choice (DaCHS has this for obscore and epntap s_region right now).
    • The output column selection is sanitised. This may make for slight changes in service responses, in particular in VOTable formats. See Output Tables in the reference documentation for details if you think this might hit you.
    • DaCHS no longer comes with an outdated version of pyparsing and instead uses what’s installed on the system. The Debian package further re-uses additional system resources if available (rjsmin, jquery).
    • DaCHS now tries a bit harder to come up with sensible names for SODA result files.
    • map/@source is no longer limited to identifier-like strings; any key that’s in your source is fair game.
    • For incremental imports with data that’s updated now and then, there’s now ignoreSources/@fromdbUpdating.
    • Relative imports from custom code (“import foo” in a custom core, for instance, getting res/foo.py) no longer work. See Importing Modules in the reference documentation for details.
    • This release fixes a severe bug in the creation of obscore metadata from SSAP tables. If you use //obscore#publishSSAPHCD or //obscore#publishSSAPMIXC mixins, update the obscore definitions by running dachs imp -m <rdid>, followed by dachs imp //obscore (the latter is only necessary once at the end).
    • You can now define a footer.html template that’s added at the foot of the main page content – with a bit of CSS magic, this lets you overwrite almost anything on DaCHS HTML pages.
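
On the TAP 1.1 point above: if you (or your users) have been using TOP to get past the match limit, the clean replacement is MAXREC, which in pyvo is just a keyword argument to the query methods. A minimal sketch (the table name is only an example):

import pyvo

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

# With TAP 1.1, TOP alone no longer lifts the service's match limit;
# asking for a higher MAXREC is what actually does it (the server may
# still impose a hard ceiling).
result = svc.run_sync(
    "SELECT TOP 50000 * FROM hipparcos.main", maxrec=50000)
print(len(result))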

As always, please complain early if something breaks for you; our regression tests can only cover so much. In particular, our support list is there for you.

Update (2017-12-06): In particular on jessie, you may see that all DaCHS packages are being held back. To resolve this situation, manually say apt-get install python-gavoutils python-gavostc.

Heidelberg Data Center Down^WUp again

Well, it has happened – perhaps it was the strain of restoring a couple of terabytes of data (as reported yesterday), perhaps it’s uncorrelated, but our main database server’s RAID threw errors and then disappeared from the SCSI bus today at about 15:03 UTC.

This means that all services from http://dc.g-vo.org are broken for the moment. We’re sorry, and we will try to at least limp on as fast as possible.

Update (2017-11-13, 14:30 UTC): Well, it’s official. What’s broken is the lousy Adaptec controller – whatever configuration we tried, it can’t talk to its backplane any more. Worse, we don’t have a spare part for that piece here. We’re trying to get one as quickly as possible, but even medium-sized shops don’t have multi-channel SAS controllers in stock, so it’ll have to be express mail.

Of course, the results of the weekend’s restore are lost; so, we’ll need about 24 hours of restore again to get up to 90% of the services after the box is back up, with large tables being restored after that. Again, we’re unhappy about the long downtime, but it could only have been averted by having a hot spare, which for this kind of infrastructure just wouldn’t have been justifiable over the last ten years.

Another lesson learned: Hardware RAID sucks. It was really hard to analyse the failure, and the messages of the controller BIOS were completely unhelpful. We, at least, will migrate to JBOD (one of the cool IT acronyms with a laid-back expansion: Just a Bunch Of Disks) and software RAID.

And you know what? At least the box had two power supplies. If these weren’t redundant, you bet the power supply would have failed.

To give you an idea how bad things are, here is the open server with the controller card that probably caused the mayhem (left), and 12 TB of fast disk, yearning for action (right).

[A database server in pieces]

Update (2017-11-14, 12:21 UTC): We’re cursed. The UPS guys with the new controller were in the main institute building. They claimed they couldn’t find anyone. Ok, our janitor is on sick leave, and it was lunch break, but still. It can’t be that hard to walk up a single flight of steps. Do we really have to wait another day?

Update (2017-11-14, 14:19 UTC): Well, UPS must have read this – or the original delivery report was bogus. Anyway, not an hour after the last entry the delivery status changed to “delivered”, and there the thing was in our mailbox.

Except – it wasn’t the controller in the first place. It turned out that, in fact, four disks had failed at the same time. It’s hard to believe but that’s what it is. Seems we’ll have to step carefully until the disks are replaced. We’ll run a thorough check tonight while we prepare the database tables.

Unless more disaster strikes, we should be back by tomorrow morning CET – but without the big tables, and I’m not sure yet whether I dare putting them in on these flimsy, enterprise-class, 15k, SAS disks. Well, I give you they’ve run for five years now.

Update (2017-11-15, 14:37 UTC): After a bit more consideration, I figured I wouldn’t trust the aging enterprise disks any more. Our admins then gave me a virtual machine on one of their boxes that should be powerful enough to keep the data center afloat for a while. So, the data center is back up at 90% (counting by the number of regression tests still failing) since an hour ago or so.

Again, the big tables are missing (and a few obscure services the RDs of which showed bitrot and need polishing); they should come in over the next days, one by one; provided the VM isn’t much slower than our DB server, you should see about two of them come in per day, with my planned sequence being hsoy, ppmxl, gps1, gaia, 2mass, sdssdr7, urat1, wise, ucac5, ucac4, rosat, ucac3, mwsc, mwsc-e14a, usnob, supercosmos.

Feel free to vote tables up if you severely miss a table.

And all this assumes no further disaster strikes…

Update (2017-11-16, 9:22 UTC): Well, it ain’t pretty. The first large catalog, HSOY, is finally in, and the CLUSTER operation (which dominates restore time) took almost 12 hours; and HSOY, at 0.5 Gigarecord, isn’t all that large. So, our replacement machine really is a good deal slower than our normal database server that did that operation in less than three hours. I guess you’ll want to do your large-table queries on a different service for the next couple of weeks. Use the Registry!

Update (2017-11-20, 9:05 UTC): With a bit more RAM (DaCHS operators: version 1.1 will have a new configuration item for indexing work memory!), things have been going faster over the weekend. We’re now down to 15 regression tests failing (of 330), with just 4 large catalogs missing still, and then a few nitty-gritty, almost invisible tables still needing some manual work.

Update (2017-11-23, 14:51 UTC): Only 10 regression tests are still failing, but progress has become slow again – the machine has been clustering supercosmos.data for the last 36 hours now; it’s not that huge a table, so it’s a bit hard to understand why this table is holding up things so much. On the plus side, new SSDs for our database server are being shipped, so we should see faster operation soon.

Update (2017-12-01, 13:05 UTC): We’ve just switched the database server back to our own machine with its fresh SSDs. A few esoteric big tables are yet missing, but we’d say the crisis is over. Hence, that’s the last update. Thank you for your attention.

A Tale of CLUSTER and Failure

[Screenshot: aptitude purge '~c']
This command nuked 5 TB of database tables (with a bit of folly before).

Whenever you read “backup”, the phrase “lessons learned” is usually not far off. And so it is here, with a little story for DaCHS operators (food for thought, I’d say), astronomers (knowing what’s going on behind the curtain sometimes helps write better queries), and everyone else (for amusement and a generous helping of schadenfreude).

It all started yesterday when I upgraded the main database server of our data center (most anything in the VO with an org.gavo.dc in the IVOID depends on it) to Debian stretch. When that was done, I decided that with about 1000 installed packages, too much cruft had accumulated and started happily removing unused software. Until I accidentally removed the postgres package. In itself, that would not have been so disastrous – we’re running Debian, which means packages usually keep the configuration and, in particular, the data around even if you remove them. The postgres packages, at the very least, do, and so does DaCHS.

Unless, that is, you purge the postgres package before you notice you’ve
removed it. I, for one, found it appropriate to purge all packages deleted but not purged right after my package deletion spree. Oh bother. Can you imagine my horror when the beastly machine said “dropping cluster main”? And ignored my panic-induced ^C (which, of course, was the right thing to do; the database was toast already anyway).

There I had just flushed 5 Terabytes of highly structured data down the drain.

Well, go restore from backup, you say? As usual with backups, it’s not that simple™. You see, backing up databases is tricky. One can of course just back up the files as they are and then try to restore from them. However, while the database is running, it is continually modifying what’s on the disk, so such a backup will be an inconsistent, unusable mess. Even if one had a file system that can do snapshots, a running server has in-memory state that is typically needed to make heads and tails of the disk image.

So, to back up a database, there are essentially variations of two themes, roughly:

  • ask the database to dump itself. The result is a conventional file that essentially is a recipe for how to re-create a particular state of the database.
  • have a “hot spare”. That’s another machine with a database server running. In one way or another that other box snoops on what the main machine is doing and just replicates the actions it sees. The net effect is that you have an immediately usable copy of your database server.

Anyway, after the opening of this article you’ll not be surprised to learn that we did neither. The hot spare scenario needs a machine powerful enough to usefully serve as a stand-in and to not slow down the main machine when we feed data by the Gigarecords. Running such a machine just for backup would be a major waste of electricity – after all, this is the first time in about 10 years that it would really have been needed, and such a box slurps juice like it’s… well, juice.

As to maintaining a dump: Well, for the big catalogs, we use DaCHS’ direct grammars [PSA: don’t follow this link unless you’re running DaCHS]. These are, except perhaps for a small factor, just as fast as a restore from a dump. And the indices (i.e., data structures that tell the computer where to look for objects with a certain position or magnitude rather than having to go through the whole table) need to be re-made when restoring from dumps, too, so we’d be pushing around files of several terabyte for almost no benefit.

Except. Except I could have known better, because during catalog ingestions the most time-consuming task usually is the CLUSTER operation. That’s when the machine re-organises the data on disk so it matches expected access patterns – for astronomical data, that’s usually by spatial location. Having a large table clustered makes an astonishing difference, in particular when you’re still using spinning disks (as we are). So, there’s really no way around it.

But it takes time. And more time. And that time is saved when restoring from a dump, because the dump (hopefully) largely preserves the on-disk organisation, and so the CLUSTER is almost a no-op.

Well, the bottom line is: on our Heidelberg data center, the big tables are only coming back slowly; as I write this, from the gigarecord league PPMXL and GPS1 are back, with SDSS DR7 and HSOY expected later today. But it’ll probably take until late next week until all the big tables are back in and properly indexed and clustered.

Apologies for any inconvenience. On the other hand, as measured by our regression tests (DaCHS operators: required reading!) 90% of our stuff is fine again, so we could fare worse given we just had a database disaster of magnitude 5 on the Terabyte scale.

Which begs the question: Was it better this way? At least many important services are safely back up, and that might very well not be the case were we running the restore from an actual dump. Hm.

Register your stuff with purx!

TOPCAT screenshot
If you open the TAP dialog of TOPCAT, what you see is Registry content.

The VO Registry lets people find astronomical resources (which is jargon for “dataset, service, or stuff“). Currently, most of its users don’t even notice they’re using the Registry, as when TOPCAT just magically lists what TAP services are available (image above) – but there are also interfaces that let you directly interact with the registry, for instance GAVO’s WIRR service or ESAVO’s Registry Search.

Arguably, the usefulness of the Registry scales with its completeness. With sufficient completeness, the domain-specific, structured metadata will also make it interesting for generic discovery of astronomical data; to put it in a quip: looking for UCDs in Google will never work quite well – and without them, it’s hard to find things with queries like “radio fluxes of early-type stars”.

Either way: If you have a data set or a service dealing with astronomy, it’d be great if you could register it. To do this, so far you either had to set up a publishing registry, which is nontrivial even if you have software that natively speaks a protocol called OAI-PMH (DaCHS does, but most other publishing suites don’t), or you could use one of two web interfaces to define your resource (notes for a talk on this I gave in 2016).

Neither of these options is really attractive if you publish only a few resources (so the overhead of running a publishing registry looks excessive) that change now and then (so using a web browser to update the resource records again and again is tedious). Therefore, GAVO has developed purx, the publishing registry proxy. We’ve officially announced it during the recent Southern Spring Interop in Santiago de Chile (Program), and the lecture notes for that talk are probably a good introduction to what this is about.

If you’re running VO services and have not registered them so far, you probably want to read both these notes and the service documentation. If, on the other hand, you just have a web-published directory of files or a browser-based service, you probably can skip even that. Just grab a sample record (use the one for a simple browser service in both cases) and adapt it to what’s fitting for your website. Then put the resulting file online somewhere and paste the URL of that location on purx’ enrollment service. In case you’re uncertain about some of the terms in the record, perhaps our crib sheet for metadata we ask our data providers for will be helpful.

There’s really no excuse any more for not being in the Registry!

GAVO at AG-Tagung 2017, Göttingen

[Image: Booth 2017]

For the 11th time, GAVO has a booth at a meeting of the venerable Astronomische Gesellschaft (AG). This year, we are in Göttingen, again offering advice to users and data providers at our booth (if you’re looking for us: We’re close to the entrance of Hörsaal 5).

And again we have a Puzzler, a little problem easily solved if you know your VO tech – and if you don’t we’ll gladly help you at our booth. We are also giving hints there, one being released at each coffee break on Tuesday and Wednesday (there are little posters with them, too, if you miss one). Of course, if you’re not in Göttingen, you’re still welcome to try your hand. You won’t get to win our great first prize then, the big Crab Nebula towel (it should be easy to spot on the image above).

If, on the other hand, you are in Göttingen, be sure to drop by our splinter meeting. Yours truly, for instance, will speak about EPN-TAP (remember And the Solar System, too right here? That’s what this is about).

Update 2017-09-20, 17:00 We’ve just given out the last hint for the puzzler, and so we can publish them all over on the puzzler archive: Hints for the 2017 puzzler. If you’re in Göttingen, you still have until tomorrow 16:00 to hand in a solution and perhaps win our nice and fuzzy Crab Nebula towel.

Update 2017-09-21, 17:00 And the winner is… again not from Marburg, which is beginning to become a running gag, and they’ve been unlucky for the last three years in a row. Anyway, here’s our proposed solution.

The Earth is Our Telescope

Antares 2007-2012 neutrino coverage
The coverage of the 2007-2012 Antares neutrinos, with positional uncertainties scaled by three.

At our Heidelberg data center, we have already published some neutrino data, for instance the Amanda-II neutrino candidates, the IceCube-40 neutrino candidates, and the 2007-2010 Antares results.

That latter project has now given us updated data, for the first time including timestamps, available as the Antares service.

Now, if you look at the coverage (above), you’ll notice at least two things: For one, there’s no data around the north pole. That’s because the instrument sits beneath the waters of the Mediterranean sea, not far from where some of you may now enjoy your vacation. And it is using the Earth as its filter – it’s measuring particles as they come “up” and discards anything that goes “down”. Yes, neutrinos are strange beasts.

The second somewhat unusual thing is that the positional uncertainties are huge compared to what we’re used to from optical catalogs: a degree is not uncommon (we’ve scaled the error circles by a factor of 3 in the image above, though). And that requires some extra care when working with the data.

In our table, we have a column origin_est that actually contains circles. Hence, to find images of the “strongest” neutrinos in our obscore table, you could write:

SELECT * FROM
ivoa.obscore AS o
JOIN (
  SELECT top 10 * FROM antares.data
  ORDER BY n_hits desc
) AS n
ON 1=INTERSECTS(
  s_region,
  origin_est)

in a query to our TAP service.
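
Wrapped into pyvo, that query (names exactly as in the listing above; obs_publisher_did is obscore’s dataset identifier) becomes:

import pyvo

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

query = """
SELECT * FROM
ivoa.obscore AS o
JOIN (
  SELECT TOP 10 * FROM antares.data
  ORDER BY n_hits DESC
) AS n
ON 1=INTERSECTS(s_region, origin_est)
"""
# Each row pairs one of the ten "strongest" neutrinos with a data
# product whose footprint overlaps the neutrino's error circle.
for row in svc.run_sync(query):
    print(row["obs_publisher_did"], row["n_hits"])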

But of course, this only gets really exciting when you can hope that perhaps that neutrino was emitted by some violent event that may have been observed serendipitously by someone else. That query then is (and we’re using all the neutrinos now):

SELECT * FROM
ivoa.obscore AS o
JOIN antares.data as n
ON  
   epoch_mjd between t_min-0.01 and t_max+0.01
  AND
    dataproduct_type='image'
  AND
    1=INTERSECTS(origin_est, s_region)

On our data center, this doesn’t yield anything at the moment (it does, though, if you do away with the spatial constraint, which frankly surprised me a bit). But then what if you went and ran this query against obscore services of active observatories? And perhaps had your computer try and figure out whether anything unusual is seen on whatever you find?

We think that would be really nifty, and right after we’ve published a first version of our little pyVO course (which is a bit on the back burner, but watch this space), we’ll probably work that out as a proper pyVO use case.
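
Until that use case is properly written up, here is a rough sketch of the idea. The list of endpoints is a hypothetical stand-in for proper registry discovery (which pyvo can also do for you), and whether a given service will digest the circle-valued origin_est column in a table upload is something you would have to try:

import pyvo

# Hypothetical, hand-picked obscore endpoints; in real life you would
# discover these through the registry rather than hard-coding them.
OBSCORE_ENDPOINTS = [
    "http://dc.g-vo.org/tap",
    # ... add TAP services of active observatories here
]

MATCH_QUERY = """
SELECT o.obs_publisher_did, n.epoch_mjd, n.n_hits
FROM ivoa.obscore AS o
JOIN TAP_UPLOAD.neutrinos AS n
ON n.epoch_mjd BETWEEN o.t_min-0.01 AND o.t_max+0.01
  AND o.dataproduct_type='image'
  AND 1=INTERSECTS(n.origin_est, o.s_region)
"""

# Fetch the neutrino candidates once from Heidelberg...
neutrinos = pyvo.dal.TAPService("http://dc.g-vo.org/tap").run_sync(
    "SELECT epoch_mjd, n_hits, origin_est FROM antares.data").to_table()

# ...and shop them around to each obscore service as a table upload.
for endpoint in OBSCORE_ENDPOINTS:
    svc = pyvo.dal.TAPService(endpoint)
    matches = svc.run_sync(MATCH_QUERY, uploads={"neutrinos": neutrinos})
    print(endpoint, len(matches))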

And meanwhile: In case you’ll be standing on the shores of the Mediterranean this summer, enjoy the view and think of the monster deep down in there waiting for neutrinos to detect – and eventually drop into our data center.