• Small Change, Big Win

    Screenshot with the Erratum content (2 lines) highlighted

    That's SCS 1.03 Erratum 2 rendered in my browser with a bit of image processing to celebrate that the world now has one painful VO legacy fewer.

    PSA: what follows is VO lore that may be entertaining but will not help you use or publish astronomical data.

    Today, I've made a very small commit to my VO publication package DaCHS (revision 8452):

    --- gavo/web/vodal.py (revision 8451)
    +++ gavo/web/vodal.py (working copy)
    @@ -260,7 +260,6 @@
            version = "1.0"
            parameterStyle = "dali"
            standardId = "ivo://ivoa.net/std/ConeSearch"
    -     defaultOutputFormat = "votable1.1"
    

    One deleted line, small cause, huge effect.

    This story starts with the oldest “operational” VO standard, Simple Cone Search, which was formally published in 2008 but really got its current shape a lot earlier.

    I was not around back then, but I think the authors expected clients to parse the VOTables the services returned using something called XML binding. That, well, was a technique in which code was generated from an XML schema, and only instance documents conforming to that exact schema could be parsed with that code.

    That is of course the opposite of the golden rule of interoperability (“be strict in what you produce and lenient in what you accept”) and thus would have been a terrible implementation choice for interoperable clients (and I believe nobody ever tried it). But somehow – or that is my explanation – the XML binding reasoning translated into the requirement that SCS services could only return VOTable 1.0 or VOTable 1.1, and that made it into the standard. It was hence the law. And that is why DaCHS had to keep VOTable 1.1 alive for writing (which the above commit of course does not remove, but I can now remove it any time I feel like it). And why it could not do a lot of useful things that required features not present in VOTable 1.1.

    Nobody dared to touch the problem for about a decade, as it was actually unclear whether some ancient code might still be doing useful work with SCS and XML binding. And I shouldn't be scolding them after I have recently broken ESO examples under the assumption that “aw, nobody's gonna do this”. Then, starting about five years ago, we had a couple of discussions at various conferences about how we might bring SCS into the present VO (where it, it has to be said, sticks out a bit for several other reasons, too, like its funky error reporting and the funny UCDs it uses). But these weren't easy: What exactly are we allowed to break within a minor version under the above assumption (“aw, nobody…”)? If we do a major version, how do we plan for the co-existence of two parallel major versions?

    Well: For the version restriction, in the end a simple Erratum was enough. On January 26, 2022, the IVOA Technical Coordination Group accepted SCS 1.03 Erratum 2. And now I can return whatever VOTable version suits me. Phewy.

    I can now have GROUPs in GROUPs (which I need to annotate photometry), and I can finally return tables with my old proposal for STC in VOTable in SCS results (where they would have mattered most – not that anyone cares any more, as that ship has sailed somewhere completely different).

    Hey, I can have xtypes. Doesn't mean anything to you? Well, try this: In TOPCAT, open VO/Cone Search. Type “Constellations” and select the “cslt cone” service. Run a query for some part of the sky, with a size of a few tens of degrees. Open a sky plot, and in there, do Layers → Add Area Control, and in that control select the table you have just pulled in. Presto: You'll see the constellation boundaries without further configuration, and that's because TOPCAT has the xtype to figure out that the odd numbers it sees are really the vertex coordinates of a spherical polygon in DALI serialisation.
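
    If you prefer pyVO over TOPCAT, the same exercise can be sketched in a few lines of Python. This is just a sketch: the registry keyword, the assumption that the first match is the cslt service, and the query position are mine.

    import pyvo
    from astropy import units as u
    from astropy.coordinates import SkyCoord

    # Find a cone search service for constellation boundaries; I am assuming
    # here that the first registry match is the cslt service.
    matches = pyvo.registry.search(keywords=["constellation"],
        servicetype="conesearch")
    svc = matches[0].get_service("conesearch")

    # Pull in boundaries around some position with a generous radius.
    result = svc.search(pos=SkyCoord(120*u.deg, 40*u.deg), radius=30*u.deg)

    # The column holding the boundaries carries xtype="polygon": a flat array
    # of longitude/latitude pairs in DALI serialisation, which is what lets
    # TOPCAT draw the shapes without further configuration.
    for name in result.fieldnames:
        if getattr(result.getdesc(name), "xtype", None) == "polygon":
            print(name, "is a DALI polygon column")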

    Not a big deal, you say? Perhaps. But lots of small deals accumulated make the difference between what you can do and what you cannot, in particular across services (which is what the VO is about).

    Removing the erroneous constraint on VOTable versions in SCS opened the standard up for quite a few small deals. Thanks, TCG!

  • Towards Data Discovery in pyVO

    When I struggled with ways to properly integrate TAP services – which may have hundreds or thousands of different resources in one service – into the VO Registry without breaking what we already had, I realised that there are really two fundamentally different modes of using the VO Registry. In the abstract of Discovering Data Collections, I wrote:

    the Registry must support both VO-wide discovery of services by type ("service enumeration") and discovery by data collection ("data discovery").

    To illustrate the difference in a non-TAP case, suppose I have archived images of lensed quasars from Telescopes A, B, and C. All these image collections are resources in their own right and should be separately findable when people look for “resources with data from Telescope A” or perhaps “images obtained between 2011-01-01 and 2011-12-31”.

    However, when a machine wants to find all images at a certain position, publishing the three resources through three different services would mean that that machine has to do three requests where one would work just as well. That is very relevant when you think about how the VO will evolve: At this point there are 342 SIAP services in the VO, and when you read this, that number may have grown further. Adding one service per collection will simply not scale when we want to keep the possibility of all-VO searches. Since I claim that is a very desirable thing, we need to enable collective services covering multiple subordinate resources.

    So, while in the first (“data discovery”) case one wants to query (or at least discover) the three resources separately, in the second case they should be ignored, and only a collective “images of lensed quasars” service should be queried.

    The technical solution to this requirement was creating “auxiliary capabilities” as described in the endorsed note on discovering data collections cited above. But these of course need client support; VO clients up to now by and large do service enumeration, as that has been what we started with in the VO Registry. Client support would, roughly, mean that clients would present their users with data collections, and then offer the various ways to access them.
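
    Seen from the client side, this roughly maps to pyVO's includeaux switch in registry searches: without it you enumerate full SIAP services, with it the data collections exposed through auxiliary capabilities turn up as well. A sketch (the keyword is only an example):

    import pyvo

    # Plain service enumeration: only full SIAP services.
    plain = pyvo.registry.search(servicetype="sia", keywords=["lensed quasar"])

    # Also include resources published through auxiliary capabilities,
    # i.e. data collections served through some collective service.
    with_aux = pyvo.registry.search(servicetype="sia",
        keywords=["lensed quasar"], includeaux=True)

    print(len(plain), "services vs.", len(with_aux), "resources including aux")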

    There are quite a number of technicalities involved in why that's not terribly straightforward for the “big” clients like TOPCAT and Aladin (though Aladin's discovery tree already comes rather close).

    Now that quite a number of people use pyVO interactively in jupyter notebooks, extending pyVO's registry interface to do data discovery in addition to the conventional service enumeration has become an attractive way to bring data discovery into practice.

    I have hence created pyVO PR #289. I think some of the rough edges will need to be smoothed out before it can be merged, but meanwhile I'd be grateful if you could try it out already. To facilitate that, I have prepared a jupyter notebook that shows the basic ideas.

    Followup (2023-12-15)

    I have just prepared a slightly updated version of the notebook.

    To run it while the PR is not merged, you need to install the forked pyVO. In order to not clobber your main installation, you can install astropy using your package manager and then do the following (assuming your shell is bash or something suitably similar):

    virtualenv --system-site-packages try-discoverdata
    . try-discoverdata/bin/activate
    cd try-discoverdata
    git clone https://github.com/msdemlei/pyvo
    cd pyvo
    git checkout add-discoverdata
    python3 setup.py develop
    jupyter notebook
    

    That should open a browser window in which you can open the notebook (you probably want to download it into the pyvo checkout in order to make the notebook selector see it). Enjoy!

  • DaCHS 2.5: Check your UCDs

    DaCHS logo on top of a map of UCDs

    In the background of the DaCHS 2.5 release picture: UCDs grabbed from the Registry. The factual background: DaCHS 2.5 will now moan at you when you invent or mistype UCDs.

    This afternoon, I have released DaCHS 2.5. As usual, I will discuss the more important changes in a blog post – this one.

    A change many of you will not like too much is that DaCHS now validates UCDs you give it, and it will warn you when you do not follow the UCD rules. This may seem like nit-picking, but as blind discovery is on the verge of becoming usable in the VO, making sure these strings actually are what they should be is becoming operationally important: If I want to find resources that give errors for their photometry, I have to know whether it's stat.error;phot.mag.b or phot.mag.b;stat.error, or else I will miss half the resources out there.
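
    If you want to check a UCD yourself before DaCHS gets to complain, astropy ships a small validator that does roughly the same thing. A sketch, with made-up example strings:

    from astropy.io.votable.ucd import check_ucd

    # True: both words are in the controlled vocabulary and may be
    # combined in this order.
    print(check_ucd("stat.error;phot.mag", check_controlled_vocabulary=True))

    # False: "phot.magnitude" is not a UCD1+ word.
    print(check_ucd("phot.magnitude", check_controlled_vocabulary=True))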

    So, I'm sorry if DaCHS starts complaining about half of your RDs after you update, but it's for a good cause. And don't feel bad about the complaints: DaCHS complained about close to half of my RDs after I had put in that feature.

    By the way, this comes as part of a larger effort on the side of the Operations IG to improve the validity of UCDs and units in the VO, an effort that has unearthed bugs in the SSAP and SLAP specifications in that they require UCDs forbidden by the UCD standard. DaCHS 2.5 still follows SSAP and SLAP, and hence external tools like stilts will protest because of bad UCDs even if DaCHS is happy. Errata for the specifications are being worked on, and once they are accepted, DaCHS and stilts will finally agree on UCD validity, or so I hope.

    Code-wise, a much more intrusive change was that asynchronous services (in particular, async TAP) now use the same formalism for parsing parameters as their synchronous counterparts. It may seem odd that that hasn't been the case up to now, but there were good reasons for that; for instance, with async, people can post incomplete parameter sets that would be rejected by normal sync processing.

    Unless you are running User UWS services, you should not notice anything. If you do run User UWS services, please contact me before upgrading. I would like to work with you on what these should look like in the future.

    Another change that might break your services is that DaCHS now actually complies with VOUnits, which has always forbidden whitespace of all kinds in unit strings. DaCHS, on the other hand, has foolishly encouraged putting whitespace between scale factors and pure units, as in 1e-10 m. That's not interoperable, and hence DaCHS now rejects such units. This may lead to hidden failures when dachs val doesn't notice something is a unit, and things only break during execution. I'm aware of one place where that's relevant: spectral cutout services that need to know the spectral unit. If you're running those, make double sure that the spectralUnit in the SSAP mixin does not contain any whitespace. It's 0.1nm according to VOUnits, not 0.1 nm.

    An update that should silently make your services more compliant is that DaCHS' representation of EPN-TAP is updated to what is currently under IVOA review. After you upgrade, DaCHS will try to update your EPN tables' metadata, which in turn should make stilts taplint a lot happier. It will also make DaCHS pass on the new IVOA table utype to the Registry, which is how people should in the future find EPN-TAP data.

    DaCHS now also contains some code that may help you import data from HDF5 files. For one, there is the HDF5 grammar, which rather directly pulls data from HDF5s written by astropy or vaex. But, really: HDF5 is a rather low-level format not particularly well suited for relational data, and it is virtually impossible to write generic code for doing something sensible with it. The two flavours DaCHS supports have very little in common, and it is therefore almost certain that if you have HDF5s coming from somewhere else, hdf5Grammar will not understand them. Still, let us know what you've got; we may be able to put in support for it.
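
    For reference, here is what producing an astropy-written HDF5 of the kind meant here might look like; the file, path, and column names are of course made up:

    from astropy.table import Table

    # Write a small table to HDF5 the astropy way; hdf5Grammar is meant to
    # read files of this flavour (and vaex-written ones).
    t = Table({"ra": [10.68, 10.69], "dec": [41.26, 41.27], "mag": [12.3, 12.4]})
    t.write("sample.h5", path="data", format="hdf5",
        serialize_meta=True, overwrite=True)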

    hdf5Grammar is written in Python, and thus imports perhaps a few thousand rows per second. For Gigarow-sized data collections, that's nowhere near fast enough, and hence for vaex-written HDF5s, there is booster support. As before, if you have bulk data in HDF5 that you want to put into a database and that was not written by vaex, let us know and we'll see what we can do.

    A surprisingly minor change enabled DaCHS to deal with materialised views, database views that are turned into actual tables by postgres. See the corresponding section in the tutorial for how you can use them. We do not have any materialised views in our Heidelberg data center yet. So, if you use them and notice something is clunky, your feedback is particularly appreciated.

    There are many smaller changes and improvements; let me mention what the changelog euphemistically calls “better systemd integration”, which really means that so far systemctl restart dachs simply didn't do anything at all. Apologies. And shame on everyone who was bewildered but failed to report this to dachs-support.

    Also, you can use float arrays in boosters now, and DaCHS' ADQL has just learned about COALESCE. That's a SQL feature that lets you deal sensibly with NULLs in some cases: COALESCE(arg1, arg2, ...) will return the first non-NULL argument it encounters. That may sound like a slightly exotic function. Until you need it, at which point you wonder how ADQL could reach its ripe age without COALESCE.
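
    In case you have not met it yet, here is a sketch of COALESCE in ADQL, run through pyVO against our data centre's TAP service; the table and column names are just examples:

    import pyvo

    # Replace NULL proper motions with 0 so downstream arithmetic keeps working.
    tap = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
    result = tap.search("""
        SELECT TOP 5
            source_id,
            COALESCE(pmra, 0) AS pmra_filled,
            COALESCE(pmdec, 0) AS pmdec_filled
        FROM gaia.dr3lite
        """)
    print(result.to_table())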

    Finally, let me mention something that is not part of the release, though it is DaCHS-related and is new since the last release: I have cleaned up the access log processing machinery we have used in Heidelberg for the past 15 years or so, and I have packaged it up for general consumption. It is, of course, a DaCHS RD that you can just check out and use in your own DaCHS installation if you have to keep access logs and want to do that with at least some basic respect for your users' rights. See http://docs.g-vo.org/DaCHS/tutorial.html#access-logs for details.

  • We'd still have IDL

    I am newly appointed as a member of the topic group for Federated Infrastructures of DIG-UM (that's an acronym for Digital Transformation in the Research on Universe and Matter), a “bottom-up organization for synergetic research on the digital transformation” (as it says in their Guidelines) in the fields covered by what the German Ministry for Research (BMBF) funds as part of its “Erforschung von Universum und Materie” (ErUM) programme. Since GAVO's work has largely been funded through that programme and its predecessors, I feel obliged to overcome my natural aversion to committee work in this case.

    The first thing I am trying to do in that function is explain the VO to our partners, who come from different branches of physics ranging from astroparticle physics (where I still feel relatively at home, though I haven't quite got around to figuring out root, a programme and format that's really common there) to accelerator physics to the Komitee Forschung mit nuklearen Sonden und Ionenstrahlen (KFSI), where people are probing into solid state matter using positron beams, which to me sounds (a) cool and (b) as if you'd better have your 511 keV protective suit on when visiting them.

    A part of this was summarising what I think are the VO's most difficult challenges at this point. Probably the most pressing of those is the problem that we now routinely have data that is so large that moving it around in full is not a good idea. Now, for large catalogues, I think TAP and ADQL are a good basis for giving people tools for remote analysis, so there I'd say all that is needed is detail work.

    In contrast, for collections of array-like data (images, say, but what I'm saying would also apply to things like a bulk analysis of a big collection of spectra), we do not have anything remotely comparable; the best you can do is make a remote cutout if you're lucky and your operator has implemented SODA. Doing something like “give me all spectra that have a strong Hα feature”, for instance, requires you to download all spectra, or at least the lines in question.

    Most data providers at this point respond to this challenge by giving their users jupyter hubs next to the data, which boils down to letting people write and execute Python scripts on the data providers' boxes from within a web browser. Admittedly, this works rather nicely for the moment, but I consider it a massive regression from the current VO, for at least the following reasons:

    • Lock-in: You cannot in general transport the jupyter notebooks you write from one provider to the next, because the execution environments are massively different (Python and package versions, package availability, data access).
    • Ephemeral: You probably will not even be able to execute the notebook reliably after the next update of the provider's platform: Python evolves relatively quickly, and many of the packages evolve even faster.
    • Undiscoverable: Nobody currently has figured out how these things could sensibly be registered such that you could ask: “Give me all execution environments I can use on data from ivo://dc.g-vo.org/tap.” Not that many are trying, given all the other problems.
    • Browser-based: Web browsers are probably the most broken and least sustainable element in current computing; if you've ever tried to tweak one of the “major browsers” to your liking, you probably know what I mean. With jupyter hubs, not only do I have to work through one of these horrible “major browsers”, the data providers also control what code is being executed in it. If they don't let me edit in vi, I can't edit in vi. Full stop[1].
    • Central control: More generally, with the current VO and its API endpoints, users get to choose what tools they use. If you'd like to use the APIs from lua or Haskell or want to cobble together stilts and shell script, go ahead. Yes, there is some initial effort to parse VOTable and perhaps support the more subtle aspects of TAP, but that's still not unreasonable. With the “platforms”, it is up to the service operators what tools they let you use.

    As a big fan of Python, I'm happy this platform thing happened exactly in the moment when Python was all the fashion (at least in Astronomy). But Python certainly isn't the end of history. People will think of smarter things (arguably, they already have), and very certainly the expectation that one tool fits all is very wrong.

    All that went through my head this morning when riding to work. And then a slogan crossed my mind that I liked so much for bringing the Platform Problem to a point that I wrote this entire post so I could publish it:

    If science platforms had come around 15 years ago, we'd all still be stuck with IDL.

    [1]Ok, there are greasemonkey-like hacks, but those are really too fragile to seriously consider.
  • The 2021 Southern Spring Interop

    A Venn diagram of product types that just doesn't work.

    A contribution for the “things that didn't work out” (“Arbeiten, die zu keiner Lösung geführt haben”) section in our reports to BMBF: an attempt to systematise product types at the last Interop. I've made a new proposal at this Interop, and there is reason to hope it will fare better.

    Last night, the second IVOA Interop conference of 2021 came to an end; I'm calling it “southern spring” because notionally, it happened in Cape Town, back to back with this year's ADASS. In reality, it was again an online event, and so, in keeping with the tradition established in pandemic times, the closing event was around midnight CET. I cannot say I will miss these late-night events, although I would not go as far as some people at the conference who quipped they'd prefer the airport security checks to having to sit through another zoom marathon.

    My contributions at this interop again had a clear focus on semantics, for instance with my public confession that my attempt to systematise “product types” at the last interop was entirely misguided; trying to force concepts like “time series”, “spectrum” or “image” into a tree does not lead to anything that actually works for what this is intended to do, that is, helping people find the sort of data they are after for a particular purpose, or helping clients route data products to other clients better suited to process them. I will now try a restart using SKOS, a plan that was met with a lot more agreement than that previous attempt. Some entertainment at the side was provided by the realisation that a “time-image cube” is normally called a movie. Next time I take in moving pictures, I'll find out what people say when I claim to be investigating a time cube.

    Another talk that took up a topic from the last Interop's Semantics session was about making an IVOA vocabulary of object types based on the work done within the CDS over the last 40 years or so. This certainly is just the beginning of a longer effort, not least because the current concepts severely fall short in the area of the solar system. But it's a start, and there's plenty of time to elaborate this before it will go through a review, presumably with the next version of Obscore.

    Also semantics-related, but over in the session of the Operations interest group, Mark Taylor reported on his activities to evaluate the standards adherence of semantics information in published tables. This activity is what had triggered me to make DaCHS validate UCDs assigned to columns in summer, something that I expect will result in quite a few diagnostics when DaCHS operators upgrade to DaCHS 2.5 (expected for November). But that's fine: making it more likely that computers will actually recognise, say, an error in proper motion for what it is is undoubtedly a good thing. I'm therefore glad that there are almost a million “good” UCDs out there and a lot fewer somehow “bad” ones. I had expected much worse after my realisation that my own annotations left a lot to be desired in summer. By now, the only bad UCDs I'm still pushing out are the ones mandated by SSAP and SLAP. The contradictions between those standards and UCD are going to be addressed with Errata in the coming months.

    My talk in the third Apps session on Thursday afternoon still had some relationship with Semantics; it was a quick show and tell on the enhancements to WIRR I had reported on here in July, and it in particular showcased obtaining UCD constraints by full-text searching the rr.table_column table in my RegTAP service and the selection through UAT concepts. Satisfyingly in some way, it was these topics that people took up in the discussion after the talk. Less satisfyingly, people playing with the thing afterwards turned up something that has the alarming taste of a bug in the new MOC operations in pgsphere. Ouch.
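
    For those who were not there: the full-text part boils down to a rather simple ADQL pattern against rr.table_column. A sketch, with the endpoint and the search term as examples only:

    import pyvo

    # GAVO's RegTAP endpoint; ivo_hasword is a RegTAP-mandated user
    # defined function for word matching in free text.
    regtap = pyvo.dal.TAPService("http://reg.g-vo.org/tap")
    ucds = regtap.search("""
        SELECT DISTINCT ucd
        FROM rr.table_column
        WHERE 1=ivo_hasword(description, 'proper motion')
        """)
    print(ucds.to_table())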

    This segues into the realm of Registry, where there was no actual session but a rather well-attended side meeting in the gathertown instance we could take over from ADASS (that, incidentally, was substantially better attended than during the previous meetings). There, I mainly presented (and explained) my proposed changes to pyVO's registry interface currently living in a private branch in my fork on github. I will write a bit more on that around the time I will turn that into a PR.

    Another outcome of this was that there was some interest in turning the note on documents in the Registry – which is what feeds VOTT – into either an endorsed note or perhaps a Recommendation of the Registry WG.

    My fourth “proper” (in the rather twisted sense of: in a zoom session) talk was an attempt to finally do something about the problems pointed out in my caproles note lamenting that our current service registration patterns are fundamentally flawed. It proposed some ways to get VOSI availability fixed, and the outcome was that we probably will drop what we currently require in that field, not least because these requirements are cheerily ignored by 98% of the resources in the Registry.

    Those were again three fairly long days, usually starting with sessions around 7:00 CET and ending with sessions around midnight. Which is clearly not healthy. But on the other hand, it somehow does convey a physical sense of the global nature of the Virtual Observatory, on which people in many, many time zones work. And that, I have to say, still is something I do appreciate.
