Heidelberg Data Center Down^WUp again

Continuous updates at the end of this article

Well, it has happened – perhaps it was the strain of restoring a couple of terabytes of data (as reported yesterday), perhaps it’s uncorrelated, but our main database server’s RAID threw errors and then disappeared from the SCSI bus today at about 15:03 UTC.

This means that all services from http://dc.g-vo.org are broken for the moment. We’re sorry, and we will try to at least limp on as fast as possible.

Update (2017-11-13, 14:30 UTC): Well, it’s official. What’s broken is the lousy Adaptec controller – whatever configuration we tried, it can’t talk to its backplane any more. Worse, we don’t have a spare part for that piece here. We’re trying to get one as quickly as possible, but even medium-sized shops don’t have multi-channel SAS controllers in stock, so it’ll have to be express mail.

Of course, the results of the weekend’s restore are lost; so, we’ll need about 24 hours of restore again to get up to 90% of the services after the box is back up, with large tables being restored after that. Again, we’re unhappy about the long downtime, but it could only have been averted by having a hot spare, which for this kind of infrastructure just wouldn’t have been justifiable over the last ten years.

Another lesson learned: Hardware RAID sucks. It was really hard to analyse the failure, and the messages of the controller BIOS were completely unhelpful. We, at least, will migrate to JBOD (one of the cool IT acronyms with a laid-back expansion: Just a Bunch Of Disks) and software RAID.

And you know what? At least the box had two power supplies. If these weren’t redundant, you bet the power supply would have failed.

To give you an idea how bad things are, here is the open server with the controller card that probably caused the mayhem (left), and 12 TB of fast disk, yearning for action (right).

[A database server in pieces]

Update (2017-11-14, 12:21 UTC): We’re cursed. The UPS guys with the new controller were in the main institute building. They claimed they couldn’t find anyone. Ok, our janitor is on sick leave, and it was lunch break, but still. It can’t be that hard to walk up a single flight of steps. Do we really have to wait another day?

Update (2017-11-14, 14:19 UTC): Well, UPS must have read this – or the original delivery report was bogus. Anyway, not an hour after the last entry the delivery status changed to “delivered”, and there the thing was in our mailbox.

Except – it wasn’t the controller in the first place. It turned out that, in fact, four disks had failed at the same time. It’s hard to believe but that’s what it is. Seems we’ll have to step carefully until the disks are replaced. We’ll run a thorough check tonight while we prepare the database tables.

Unless more disaster strikes, we should be back by tomorrow morning CET – but without the big tables, and I’m not sure yet whether I dare put them on these flimsy, enterprise-class, 15k, SAS disks. Well, I’ll give you that they’ve run for five years now.

Update (2017-11-15, 14:37 UTC): After a bit more consideration, I figured I wouldn’t trust the aging enterprise disks any more. Our admins then gave me a virtual machine on one of their boxes that should be powerful enough to keep the data center afloat for a while. So, the data center has been back up at 90% (counting by regression tests) for about an hour now.

Again, the big tables are missing (and a few obscure services the RDs of which showed bitrot and need polishing); they should come in over the next days, one by one; provided the VM isn’t much slower than our DB server, you should see about two of them come in per day, with my planned sequence being hsoy, ppmxl, gps1, gaia, 2mass, sdssdr7, urat1, wise, ucac5, ucac4, rosat, ucac3, mwsc, mwsc-e14a, usnob, supercosmos.

Feel free to vote tables up if you sorely miss one.

And all this assumes no further disaster strikes…

Update (2017-11-16, 9:22 UTC): Well, it ain’t pretty. The first large catalog, HSOY, is finally in, and the CLUSTER operation (which dominates restore time) took almost 12 hours; and HSOY, at 0.5 Gigarecord, isn’t all that large. So, our replacement machine really is a good deal slower than our normal database server that did that operation in less than three hours. I guess you’ll want to do your large-table queries on a different service for the next couple of weeks. Use the Registry!

Update (2017-11-20, 9:05 UTC): With a bit more RAM (DaCHS operators: version 1.1 will have a new configuration item for indexing work memory!), things have been going faster over the weekend. We’re now down to 15 regression tests failing (of 330), with just 4 large catalogs missing still, and then a few nitty-gritty, almost invisible tables still needing some manual work.

Update (2017-11-23, 14:51 UTC): Only 10 regression tests are still failing, but progress has become slow again – the machine has been clustering supercosmos.data for the last 36 hours now; it’s not that huge a table, so it’s a bit hard to understand why it is holding things up so much. On the plus side, new SSDs for our database server are being shipped, so we should see faster operation soon.

A Tale of CLUSTER and Failure

[Screenshot: aptitude purge '~c']
This command nuked 5 TB of database tables (with a bit of folly before).

Whenever you read “backup”, the phrase “lessons learned” is usually not far off. And so it is here, with a little story for DaCHS operators (food for thought, I’d say), astronomers (knowing what’s going on behind the curtain sometimes helps write better queries), and everyone else (for amusement and a generous helping of schadenfreude).

It all started yesterday when I upgraded the main database server of our data center (most anything in the VO with an org.gavo.dc in the IVOID depends on it) to Debian stretch. When that was done, I decided that with about 1000 installed packages, too much cruft had accumulated and started happily removing unused software. Until I accidentally removed the postgres package. In itself, that would not have been so disastrous – we’re running Debian, which means packages usually keep the configuration and, in particular, the data around even if you remove them. The postgres packages, at the very least, do, and so does DaCHS.

Unless, that is, you purge the postgres package before you notice you’ve removed it. I, for one, found it appropriate to purge all packages deleted but not purged right after my package deletion spree. Oh bother. Can you imagine my horror when the beastly machine said “dropping cluster main”? And ignored my panic-induced ^C (which, of course, was the right thing to do; the database was toast already anyway).

There I had just flushed 5 Terabytes of highly structured data down the drain.

Well, go restore from backup, you say? As usual with backups, it’s not that simple™. You see, backing up databases is tricky. One can of course just back up the files as they are and then try to restore from them. However, while the database is running, it is continually modifying what’s on the disk, so such a backup will be an inconsistent, unusable mess. Even if one had a file system that can do snapshots, a running server has in-memory state that is typically needed to make heads or tails of the disk image.

So, to back up a database, there are essentially variations of two themes, roughly:

  • ask the database to dump itself. The result is a conventional file that essentially is a recipe for how to re-create a particular state of the database.
  • have a “hot spare”. That’s another machine with a database server running. In one way or another that other box snoops on what the main machine is doing and just replicates the actions it sees. The net effect is that you have an immediately usable copy of your database server.

Anyway, after the opening of this article you’ll not be surprised to learn that we did neither. The hot spare scenario needs a machine powerful enough to usefully serve as a stand-in and to not slow down the main machine when we feed data by the Gigarecords. Running such a machine just for backup would be a major waste of electricity – after all, this is the first time in about 10 years that it would really have been needed, and such a box slurps juice like it’s… well, juice.

As to maintaining a dump: Well, for the big catalogs, we use DaCHS’ direct grammars [PSA: don’t follow this link unless you’re running DaCHS]. These are, except perhaps for a small factor, just as fast as a restore from a dump. And the indices (i.e., data structures that tell the computer where to look for objects with a certain position or magnitude rather than having to go through the whole table) need to be re-made when restoring from dumps, too, so we’d be pushing around files of several terabytes for almost no benefit.

Except. Except I could have known better, because during catalog ingestions the most time-consuming task usually is the CLUSTER operation. That’s when the machine re-organises the data on disk so it matches expected access patterns – for astronomical data, that’s usually by spatial location. Having a large table clustered makes an astonishing difference, in particular when you’re still using spinning disks (as we are). So, there’s really no way around it.
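
For DaCHS operators who have not met it yet, this is roughly what such a step looks like in plain PostgreSQL; the table and index names here are made up for illustration:

-- rewrite the table on disk in the order of a previously created
-- (here: spatial) index; this locks the table and is what takes so long
CLUSTER hsoy.main USING main_q3c_idx;
-- refresh the planner statistics after the rewrite
ANALYZE hsoy.main;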

But it takes time. And more time. And that time is saved when restoring from a dump, because the dump (hopefully) largely preserves the on-disk organisation, and so the CLUSTER is almost a no-op.

Well, the bottom line is: on our Heidelberg data center, the big tables are only coming back slowly; as I write this, from the gigarecord league PPMXL and GPS1 are back, with SDSS DR7 and HSOY expected later today. But it’ll probably take until late next week until all the big tables are back in and properly indexed and clustered.

Apologies for any inconvenience. On the other hand, as measured by our regression tests (DaCHS operators: required reading!) 90% of our stuff is fine again, so we could fare worse given we just had a database disaster of magnitude 5 on the Terabyte scale.

Which raises the question: Was it better this way? At least many important services are safely back up, and that might very well not be the case were we running the restore from an actual dump. Hm.

Register your stuff with purx!

TOPCAT screenshot
If you open the TAP dialog of TOPCAT, what you see is Registry content.

The VO Registry lets people find astronomical resources (which is jargon for “dataset, service, or stuff”). Currently, most of its users don’t even notice they’re using the Registry, as when TOPCAT just magically lists what TAP services are available (image above) – but there are also interfaces that let you directly interact with the registry, for instance GAVO’s WIRR service or ESAVO’s Registry Search.

Arguably, the usefulness of the Registry scales with its completeness. With sufficient completeness, the domain-specific, structured metadata will also make it interesting for generic discovery of astronomical data; to put it in a quip: googling for UCDs will never work particularly well – and without that, it’s hard to find things with queries like “radio fluxes of early-type stars”.

Either way: If you have a data set or a service dealing with astronomy, it’d be great if you could register it. To do this, so far you either had to set up a publishing registry, which is nontrivial even if you have software that natively speaks a protocol called OAI-PMH (DaCHS does, but most other publishing suites don’t), or use one of two web interfaces to define your resource (notes for a talk on this I gave in 2016).

Neither of these options is really attractive if you publish only a few resources (so the overhead of running a publishing registry looks excessive) that change now and then (so using a web browser to update the resource records again and again is tedious). Therefore, GAVO has developed purx, the publishing registry proxy. We’ve officially announced it during the recent Southern Spring Interop in Santiago de Chile (Program), and the lecture notes for that talk are probably a good introduction to what this is about.

If you’re running VO services and have not registered them so far, you probably want to read both these notes and the service documentation. If, on the other hand, you just have a web-published directory of files or a browser-based service, you probably can skip even that. Just grab a sample record (use the one for a simple browser service in both cases) and adapt it to what’s fitting for your website. Then put the resulting file online somewhere and paste the URL of that location on purx’ enrollment service. In case you’re uncertain about some of the terms in the record, perhaps our crib sheet for metadata we ask our data providers for will be helpful.

There’s really no excuse any more for not being in the Registry!

GAVO at AG-Tagung 2017, Göttingen

[Image: Booth 2017]

For the 11th time, GAVO has a booth at a meeting of the venerable Astronomische Gesellschaft (AG). This year, we are in Göttingen, again offering advice to users and data providers at our booth (if you’re looking for us: We’re close to the entrance of Hörsaal 5).

And again we have a Puzzler, a little problem easily solved if you know your VO tech – and if you don’t we’ll gladly help you at our booth. We are also giving hints there, one being released at each coffee break on Tuesday and Wednesday (there are little posters with them, too, if you miss one). Of course, if you’re not in Göttingen, you’re still welcome to try your hand. You won’t get to win our great first prize then, the big Crab Nebula towel (it should be easy to spot on the image above).

If, on the other hand, you are in Göttingen, be sure to drop by our splinter meeting. Yours truly, for instance, will speak about EPN-TAP (remember And the Solar System, too right here? That’s what this is about).

Update 2017-09-20, 17:00 We’ve just given out the last hint for the puzzler, and so we can publish them all over on the puzzler archive: Hints for the 2017 puzzler. If you’re in Göttingen, you still have until tomorrow 16:00 to hand in a solution and perhaps win our nice and fuzzy Crab Nebula towel.

Update 2017-09-21, 17:00 And the winner is… again not from Marburg, which is beginning to become a running gag – they’ve now been unlucky for three years in a row. Anyway, here’s our proposed solution.

The Earth is Our Telescope

Antares 2007-2012 neutrino coverage
The coverage of the 2007-2012 Antares neutrinos, with positional uncertainties scaled by three.

At our Heidelberg data center, we have already published some neutrino data, for instance the Amanda-II neutrino candidates, the IceCube-40 neutrino candidates, and the 2007-2010 Antares results.

That latter project has now given us updated data, for the first time including timestamps, available as the Antares service.

Now, if you look at the coverage (above), you’ll notice at least two things: For one, there’s no data around the north pole. That’s because the instrument sits beneath the waters of the Mediterranean Sea, not far from where some of you may now enjoy your vacation. And it is using the Earth as its filter – it’s measuring particles as they come “up” and discards anything that goes “down”. Yes, neutrinos are strange beasts.

The second somewhat unusual thing is that the positional uncertainties are huge compared to what we’re used to from optical catalogs: a degree is not uncommon (we’ve scaled the error circles by a factor of 3 in the image above, though). And that requires some extra care when working with the data.

In our table, we have a column origin_est that actually contains circles. Hence, to find images of the “strongest” neutrinos in our obscore table, you could write:

SELECT * FROM
ivoa.obscore AS o
JOIN (
  SELECT top 10 * FROM antares.data
  ORDER BY n_hits desc
) AS n
ON 1=INTERSECTS(
  s_region,
  origin_est)

in a query to our TAP service.

But of course, this only gets really exciting when you can hope that perhaps that neutrino was emitted by some violent event that may have been observed serendipitously by someone else. That query then is (and we’re using all the neutrinos now):

SELECT * FROM
ivoa.obscore AS o
JOIN antares.data as n
ON  
   epoch_mjd between t_min-0.01 and t_max+0.01
  AND
    dataproduct_type='image'
  AND
    1=INTERSECTS(origin_est, s_region)

On our data center, this doesn’t yield anything at the moment (it does, though, if you do away with the spatial constraint, which frankly surprised me a bit). But then if you went and ran this query against obscore services of active observatories? And perhaps had your computer try and figure out whether anything unusual is seen on whatever you find?
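
Incidentally, the time-only variant mentioned in the parenthesis is simply the query above with the INTERSECTS condition dropped, i.e., something like:

SELECT * FROM
ivoa.obscore AS o
JOIN antares.data as n
ON
   epoch_mjd between t_min-0.01 and t_max+0.01
  AND
    dataproduct_type='image'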

We think this kind of hunt for counterparts would be really nifty, and right after we’ve published a first version of our little pyVO course (which is a bit on the back burner, but watch this space), we’ll probably work that out as a proper pyVO use case.

And meanwhile: In case you’ll be standing on the shores of the Mediterranean this summer, enjoy the view and think of the monster deep down in there waiting for neutrinos to detect – and eventually drop into our data center.

DaCHS 1.0 released

Today, I have released DaCHS 1.0 – after long years in the 0.9 range, it was finally time to do so. The jump in the major version number was an opportunity to remove some cruft that had accumulated over the years; this, on the other hand, means that if you’re running DaCHS, you should watch the upgrade and see if anything broke later (this might be the perfect time to add regression tests to your RDs).

The changelog is below, but before that a bold-faced warning:

Install python-astropy before upgrading

This is because DaCHS now depends on astropy rather than pyfits and pywcs. The latter is no longer part of Debian stretch, and so we made the jump to astropy (that would have been due during Debian stretch’s lifetime anyway) even before 1.0.

Now, Debian holds back packages with new dependencies, and due to the way DaCHS’ modules are distributed, DaCHS will break when some of its packages are held back. The symptom is error messages like “pkg_resources.DistributionNotFound: gavodachs==0.9.8”. If you already see those, an apt-get dist-upgrade should get you in business again.

With this out of the way, here is an annotated log of the major changes:

  • DaCHS’ main entry point is now actually called dachs (i.e., call dachs imp q and such in the future). gavo will work as an alias for quite a while to come, though, and it’s still used a lot in the documentation (you’re welcome to fix this: the docs are maintained on github).
  • Hopefully more useful manpage (of course, also available with man dachs) – have a peek!
  • UWS support is now at version 1.1 (i.e., there’s creationDate in jobs, filters in the joblist, and slow polling).
  • Added “declarative” licenses. Please read the Licensing chapter in the tutorial and slap licenses on your data.
  • Now using astropy.wcs instead of pywcs, and astropy.io.fits instead of pyfits. The respective APIs have, unfortunately, changed quite a bit. If you’re using them (e.g., in processors), you’ll have to change your code; it’s unlikely services are impacted at runtime. (see also How do I update my code?).
  • Removed the //epntap#table-2_0 mixin. Use //epntap2#table-2_0 instead (sorry).
  • Removed sdmCore (use Datalink/SODA instead); the SODA procs in //datalink are also gone, use the ones from //soda instead (sorry, SODA development has been difficult on the IVOA level).
  • Removed imp -u flag and the corresponding updateMode parse option. If you used that or the uploadCore, just mark the DDs involved with updating="True" instead.
  • Massive sanitation of input parameter processing. If you’ve been using inputTable, inputDD, or have been doing creative things with inputKeys, please check the respective services carefully after upgrading. See also DaCHS’ Service Interface in the reference documentation. The most user-visible change in this department is if you’ve been using repeated parameters to fill array-valued inputs. That’s no longer allowed; if you actually must have this kind of thing, you’ll need a custom core and must fill the arrays by hand.
  • In DaCHS’ SQL interface, tuples now are matched to records and lists to arrays (it was the other way round before). If while importing you manually created tuples to fill array-like columns, you’ll have to make lists from these now.
  • rsc.makeData or rsc.TableForDef no longer automatically make connections when used on database tables. You must give them explicit connection arguments now (with base.getTableConn() as conn:).
  • logo_tiny.png and logo_big.png are now ignored by DaCHS, all logos spit out by it are now based on logo_medium.png, including, if not overridden, the favicon (that you will now get if you have not set it before).
  • Removed (probably largely unused) features editCore, SDM2 support, pkg_resource overrides, simpleView, computedCore.
  • Removed the argparse module shipped with DaCHS. This breaks compatibility with python 2.6 (although you can still run DaCHS with a manually installed argparse.py in 2.6).

Even though that’s quite a mouthful, I expect few people will actually experience breaking services. If you do, by all means let us know on the DaCHS-support mailing list.

As usual, the general upgrading instructions are available in the operator’s guide; if you plan on upgrading to stretch soon, also have a look at hints on postgres upgrades. Stretch comes with postgres 9.6 (jessie: 9.4), and you should migrate sooner or later anyway.

Users not using Debian’s package management can, as usual, grab tarballs from http://soft.g-vo.org/dachs.

ADQL tricks at MPIA

Aerial image of Heidelberg and Königstuhl
The 2017-06-29 ADQL talk (red circle) from 30000 ft

Today I was up on Heidelberg’s signature mountain, Königstuhl, at the Max-Planck-Institute for Astronomy for a little talk on what I’d provisionally call “intermediate ADQL” – discussing some aspects of ADQL and some TAP techniques that may not be immediately obvious but still generally and straightforwardly applicable to everyday problems. Since I suspect the lecture notes for that talk may be of interest to some readers of this blog, I thought I should share them here.

What this also contains is a very quick piece of pyVO-based python (which needs both this helper and a recent pyVO) for a use case that comes up fairly often: “Give me all proper motions (radio fluxes, distances, radial velocities, whatever) for objects in this region.”

This uses a discovery case I’ve been after for quite a while now: Find services by the UCDs of tables within them. And while that’s been possible for quite a while on GAVO’s Registry UI WIRR, there are still too many services that don’t declare their tables to the Registry, and when talking about TAP, the situation is still a bit worse (as has been mentioned in my account of the last interop). So – enjoy the code, but very frankly, you’ll still see wires sticking out for several months yet.
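
To give you an idea of what this looks like under the hood, here is a rough sketch of the kind of RegTAP query involved; run it against a RegTAP service (the UCD pattern is just an example, and the details may differ from what the helper actually does):

SELECT DISTINCT ivoid, access_url
FROM rr.capability
  NATURAL JOIN rr.interface
  NATURAL JOIN rr.table_column
WHERE standard_id='ivo://ivoa.net/std/tap'
  AND intf_role='std'
  AND ucd LIKE 'pos.pm%'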

And if you run a TAP service yourself, please have a look at how to enable table discovery over on the IVOA wiki so we can finally get those pesky wires out of our users’ eyes.

GAVO at the Northern Spring Interop

[IVOA 2002-2017]
15 Years of IVOA: The birthday cake our Shanghai hosts prepared for us.

Every half year, VO enthusiasts from all over the world gather for an “Interoperability conference”, or Interop for short. The latest such event, the Shanghai Interop 2017, ended Friday a week ago. It has been a “long” one again after the short southern spring Interop in Trieste last year (featured in this blog).

As usual, it was a week of many discussions and much consensus-building. In this post, I’d like to mention a few of the GAVO-related contributions; links typically go to slides or lecture notes PDFs.

On the Registry side of things, we’re currently (among many other things) bridging the gap between DOIs and the Registry in VOResource 1.1, and we invited registry providers to take up the new features, as well as proposing how to update RegTAP (which is used to actually query the Registry) to cope with the new metadata.

Also in Registry, our efforts of almost a decade to properly support registering tables and similar data collections bore fruit (Britain’s Mark Taylor reported on his experiences taking up our current proposal), and the fairly spectacular new Aladin V10 (presented by the CDS’ Pierre Fernique, who showed off what I’m tempted to call a “visual registry interface”) urgently needs what we’ve developed over the years.

We furthermore reported on new steps to finally let people search the registry using Space-Time constraints (spoiler: the tech is almost there, registry records need lots of work).

Spatial searches in the registries are one thing enabled by storing and searching for MOCs in relational databases, as was reported by Markus Nullmeier over in an Applications session. The setting may already tell you that these MOCs (Multi Order Coverages, a healpix-based way of representing fairly arbitrary areas on the sky) have applications far beyond Registry.

Also in Apps, Ole reported on progress in packaging VO applications for easy and reliable installation, in this case for Debian and derivatives. Finally for Apps, Margarida reported on getting lines and line lists into the spectral analysis package SPLAT: Implementation of SLAP and VAMDC interfaces in SPLAT-VO.

In the wider area of data access protocols and underlying data models, we contributed to Marco’s talk on the long-overdue facelifting for the VO’s bedrock, Simple Cone Search (Keeping SCS up-to-date within DAL landscape) – the fact that there’s an installed base of 15000 of such services may let you guess that we need to tread lightly here. On the bleeding-edge side of things, we presented our current ideas on how, eventually, several data models, data modelling as such and the annotation of data according to these data models might play together in publishing time domain data with DaCHS (previously featured on this blog in a slightly less technical way).

We also talked about education and outreach. Hendrik reported on our ADQL course and how it helps future astronomers learn to deal efficiently with even very large datasets. Hendrik’s long-lasting dedication to these topics did not go unpunished at this interop: since the Exec meeting on the Interop Wednesday, he has been vice-chairing the education interest group of the IVOA. Back in the session I also mused a bit about what metadata changes are needed to make the VO tutorial collection VOTT more useful.

It is a particular pleasure for me to mention that the IVOA has a new interest group: “Solar System”. Regular readers of the blog will have noticed that I have a particularly soft spot in my heart for that crowd, and so I gave a short overview over how DaCHS is used among them, too.

And that’s just the official programme. Much more fixing, designing, and discussion went on between sessions or in the evenings. The latter, of course, included some decidedly less technical aspects. Including, as pictured above, a nice birthday cake for the IVOA, as it is now 15 years since the first Interop meeting in January 2002.

Time Series

The IVOA’s committee on science priorities (CSP) declared the “time domain” one of its focus topics quite a while ago, an action boiling down to a call to the IVOA member projects to think about support for time series and their analysis in services, standards, and clients.

While for several years, response has been lackluster, work on time series has gathered quite a bit of steam recently. For instance, the spectral client SPLAT (co-maintained by GAVO) has grown some preliminary support to properly display time series (very rudimentary in what’s currently released), and lively discussions on proper metadata for time series have been going on on the Data Models mailing list of the IVOA – if you’re interested in the time domain, this would be a good time to subscribe for a while and comment as appropriate.

Meanwhile, in our Heidelberg data center, we’ve joined the fray by publishing our first time series service (science background: searching for exoplanets in the Milky Way bulge using gravitational lensing), which is available through SSA (look for k2c9vst) and through ObsCore (at http://dc.g-vo.org/tap, collection name k2c9vst), too. For details see also the service info.
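
If you want to try the ObsCore route in TAP, a query along the following lines against http://dc.g-vo.org/tap should turn up the time series; the column selection is just an example:

SELECT TOP 20 obs_publisher_did, t_min, t_max, access_url
FROM ivoa.obscore
WHERE obs_collection='k2c9vst'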

Since right now future standards are being worked out, this is a perfect time to publish your time series; this way you get to influence what people will be able to tell machines about their time series in the next couple of years. Ask our staff (contact below) if you want us to publish for you. But you can also self-publish using the DaCHS publication package. Refer to the resource descriptor of the k2c9vst service to get started.

At its heart is the table definition of the time series, which is basically


<table id="instance">
  <column name="hjd" type="double precision"
      unit="d" ucd="time.epoch"
      tablehead="Time"
      description="Time this photometry corresponds to."
      verbLevel="1"/>
  <column name="df" type="double precision"
      unit="adu" ucd="phot.flux"
      tablehead="Diff. Flux"
      description="Difference as defined by 2008MNRAS.386L..77B"
      verbLevel="1"/>
  <column name="e_df"
      unit="adu" ucd="stat.error;phot.flux"
      tablehead="Err. DF"
      description="Error in difference flux."
      verbLevel="15"/>
</table>

– in the actual service, there are a few more columns, but time, value, and error actually make up a full time series.

Except that a machine can’t really tell what this is yet (well, perhaps it could using UCDs, but that’s a different matter). What it needs to work out is what’s the independent axis, what the frames are, etc. And to do that, the machine needs annotation, i.e., machine-readable, structured declarations alongside the data and the “classic” metadata like units and descriptions.

In actual VOTables, this will be happening through VO-DML annotation, which is also still seriously being discussed; whatever we currently spit out you can inspect in the XML source of this example document.

DaCHS, however, isolates you from the concrete details of writing VOTables. Instead, you write annotations in a JSON-inspired little language we’ve christened SIL (“Simple Instance Language”; reference). The complicated part is to know what types and attributes you have to declare, which is exactly what the data models are about. As said initially, the details are still in flux here, but this is what things look like right now:


<dm>
  (ivoa:Measurement) {
    value: @df
    statError: @e_df
  }
</dm>

<dm>
  (stc2:Coords) {
    time: (stc2:Coord) {
      frame:
        (stc2:TimeFrame) {
          timescale: UTC
          refPosition: BARYCENTER 
          kind: JD }
      loc: @hjd
    }
    space: 
      (stc2:Coord) {
        frame:
          (stc2:SpaceFrame) {
            orientation: ICRS
            epoch: "J2000.0"
          }
        loc: [@raj2000 @dej2000]
    }
  }
</dm>

<dm>
  (ndcube:Cube) {
    independent_axes: [@hjd]
    dependent_axes: [@df @mag]
  }
</dm>

If you consider this for a moment, you’ll see that each dm element corresponds to something like an object template of a certain “type”. The first, for instance, defines a measurement with a value and a statistical error. Both happen to be given as references to columns in the table defined above (as indicated by the @ signs).

The last annotation defines a data cube; a time series in this definition is simply a data cube with just a single non-degenerate independently varying axis (the independent_axes attribute; in the value the square brackets indicate a sequence) that happens to be time-like. And that hjd is time-like, VO-DML enabled clients will work out when interpreting the STC (“Space-Time-Coordinates”) annotation. In there, you will see that hjd is referenced from the time attribute and with a time-like frame that also defines that this particular flavor of HJD is what a hypothetical clock at the solar system’s barycenter would measure if it stood in the gravitational potential in Greenwich, and had leap seconds thrown in now and then. And that long story is communicated through “literals”, constant strings like “BARYCENTER” or “TT”, which are also legal within DaCHS data model annotations.

This may seem a bit complicated at first. I argue, though, that given what time series clients will have to do anyway, going through the cube and STC annotations is actually about the most straightforward thing you can do.

But perhaps I’m wrong, so again: None of this is cast in stone right now. Comments are even more welcome than usual, either below or at gavo@ari.uni-heidelberg.de.

Asterics Tech Forum

The third Asterics DADI Tech Forum took place last week in Strasbourg – and many GAVO members made contributions as well.
This time, there were 3 slots for hackathon sessions, which were also used for discussions. We’ll mention two highlights of our contributions here.

We took the opportunity to push our Provenance Data Model efforts and used the hackathon slots for provenance discussions.
One topic was the links between the simulation data model and ProvenanceDM, and how to map from SimDM to ProvenanceDM classes. This mapping works quite well and will be included in the working draft for the data model. We also had an interesting talk by José Enrique Ruiz on his view on Provenance, workflows, and – very important – the “deployer” and “system” provenance for storing all the environment variables that may be needed to rerun the processing of some observational data. Michèle Sanguillon also presented for the first time her extension of the prov Python library (W3C) with concepts from our IVOA Provenance Data Model. We also had interested people joining in from outside the usual provenance crowd, e.g. from the Astron project. More about our Provenance modelling efforts can be found at the IVOA Provenance wiki page.

A world premiere (of sorts) was the first discussion of RegTAP 1.1. RegTAP is a search interface to the VO Registry; it is what TOPCAT or other VO clients use when you type in keywords to locate services. A fairly direct web-based interface is our WIRR registry interface. RegTAP will need a bit of a makeover since VOResource, the underlying metadata scheme, is currently receiving one, allowing, in particular, for including DOIs and ORCIDs (John Does of this world, rejoice: People can finally uniquely find your data and not that of all the other J. Does) in Registry records and figuring out licenses on data. Licensing may not matter when you use data in a paper but it does matter if you want to redistribute data, e.g. for planetarium programs with catalog data or pretty pictures, or when re-mixing data.
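
To make that a bit more concrete: once RegTAP 1.1 services are around, finding the resource behind a DOI might look roughly like this (table and column names follow the current draft and may still change; the DOI is a placeholder):

SELECT res_title, ivoid
FROM rr.resource
  NATURAL JOIN rr.alt_identifier
WHERE alt_identifier='doi:10.5072/example'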

But of course the GAVOistas happily joined the fray on the many other topics discussed, from a standard format for a time series to interoperable authentication, from datalink applications to figuring out if data coming into a program should be treated as a collection of spectra or rather an object catalog – the latter in the context of the upcoming version 10 of the VO’s premier image tool Aladin, which we saw (probably another premiere) demoed. We can already promise you an exciting update!