• Horror vacui begone

Mikhail's qrdcreator in a browser and an editor with a dachs start-produced template.

One of the major usability issues our publishing suite DaCHS has for operators (i.e., people who want to publish data) is the “horror vacui”: How do I start a Resource Descriptor (RD – the file DaCHS interprets to build services)?

I used to recommend starting by having a look at the RDs of our existing services and picking whatever best matches your publication project. But finding a matching service and figuring out what is generic, what's a special property of the concrete data collection, and what's a hack that should not be reproduced isn't straightforward at all, not to mention that some of those RDs have been in maintenance mode for almost 10 years and hence may show deprecated practices.

Then came the VESPA implementation workshop last year, during which Mikhail Minin showed me a piece of javascript and HTML (source on github) he had written to overcome the empty editor window. Essentially, Mikhail has built a fairly comprehensive form interface in a web browser that asks people the right questions to eventually write an RD for EPN-TAP (i.e., solar system) resources.

I had planned to generalise Mikhail's approach to several types of resources supported by DaCHS, ideally inferring the questions to ask from the built-in documentation of mixins and applys. But during the last year, whenever I felt it would be a good time to tackle that generalisation, I quickly gave up again. What stood in the way was mostly rather trivial stuff, such as how to tell apart repeatable metadata (waveband, say) from non-repeatable metadata (instrument, say). But it was bad enough that I quickly found something else to do each time I got started.

    Eventually, I gave up on a menu interface altogether – making it flexible and generatable at the same time seemed a fairly complex problem. But that doesn't mean I forgot about overcoming the horror vacui thing. So, when forms aren't flexible enough for data entry, where do you turn? Right! A text editor.

    Enter dachs start. That's a new DaCHS subcommand that gets you started with your RD. For one, you can list the templates available:

    $ dachs start list
    siap -- Image collections via SIAP1 and TAP
    ssap+datalink -- Spectra via SSAP and TAP, going through datalink
    epntap -- Solar system data via EPN-TAP 2.0
    scs -- Catalogs via SCS and TAP
    

    More templates are planned; siap+datalink, for instance, would cover some frequent use cases. Feel free to mail in requests.

    Once you find a suitable template, create your future resource directory, enter it and run dachs start again, this time passing the name of the template you want:

    $ mkdir ex_data
    $ cd ex_data
    $ dachs start scs
    $ head -16 q.rd | tail -9
    <resource schema="ex_data">
      <meta name="creationDate">2018-04-13T12:34:31Z</meta>
    
      <meta name="title">%title -- not more than a line%</meta>
      <meta name="description">
        %this should be a paragraph or two (take care to mention salient terms)%
      </meta>
      <!-- Take keywords from
        http://astrothesaurus.org/thesaurus/hierarchical-browse/
    

dachs start uses the directory name as the new schema name and then writes a file q.rd (which is the canonical name for the “main” RD in a resource). Within this file, you'll see things to fill out between pairs of percent signs, with short explanations. Where longer explanations are necessary, embedded comments should help.

    To give you an idea of the intended use: As a vim user, I've put

    augroup rd
      au!
  au BufRead,BufNewFile *.rd imap <F8> <ESC>/%[^%]*%<CR>a
  au BufRead,BufNewFile *.rd imap <F9> <ESC>cf%
    augroup END
    

    into my ~/.vimrc. That way, while editing the template into an actual RD, hitting F8 takes me to the next thing to be edited; I can then read the instructions, and when I have made up my mind, I can either delete the template element or hit F9 and replace the explanation text with whatever belongs there.

    The command is available starting with the 1.1.3 beta (available now by switching to the beta repo) and will be part of the 1.2 release, planned for early June after the Victoria interop.

    If you have a publication project: just try it out and give feedback. Note that the templates haven't actually been tested yet, and the comments were written by a DaCHS and VO nerd, so they might not always be great either. Thus, when you get stuck: complain early, complain often!

  • Speak out on ADQL 2.1

If you've always wanted to be part of a standardisation process within the IVOA (and who would not?), the time has rarely been as good as now. Because: We're updating ADQL! Yes! The ADQL you are writing your queries in will receive a few more language elements, and we're carefully trying to heal a few things that turned out to be warts. And while some of the changes are as dull and boring as you may expect standards work to be, on some of them you may wish to have a say.

    Also, you can try things out – the GAVO data center TAP endpoint at http://dc.g-vo.org/tap already has most of the proposed features, and the new DaCHS beta 1.1.2 (out since last Friday) does, too. So, if you're running DaCHS yourself, you can start playing after switching to the beta repository.

    What's new?

    • You're now supposed to write the standard crossmatch as DISTANCE(ra1, dec1, ra2, dec2)<dist. This replaces the old dance with 1=CONTAINS(POINT(), CIRCLE()) that you've probably learned to hate. Finally: Crossmatching without having to resort to TOPCAT's example menu...
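
      For instance, a 1.5 arcsecond crossmatch between two tables might now look like this (table and column names are invented for the example; remember that DISTANCE returns degrees):

      SELECT a.source_id, b.obj_id
      FROM mine.stars AS a
      JOIN yours.sources AS b
        ON DISTANCE(a.ra, a.dec, b.ra, b.dec) < 1.5/3600.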

• ADQL geometries used to require a first argument that would give the reference frame, as in POINT('ICRS', ra, dec). The hope was that services could then automagically make a statement like CONTAINS(point_in_icrs, circle_in_galactic) work as presumably intended. Few services ever did (DaCHS still tries reasonably hard), and when they did, there were all kinds of opaque oddities. One of the most common sources of confusion is the question of what a service is supposed to do with POINT('GALACTIC', ra, dec), assuming it knows that ra and dec are in, say, B1950 FK4. Also, is there any expectation that services attempt to do anything beyond a simple rotation (FK4, for instance, rotates noticeably against the ICRS, so proper motions would need to get fixed, too)? In all, the frame as a first argument was ill thought-out, and it's been deprecated. Simply don't put in the string-typed first argument any more. POINT(long, lat) does it. True: This, more than ever, calls for an ADQL astrometry library so you can easily convert, at least, between Galactic and ICRS (probably a few more would be useful, too). More on this in some future post.

    • Services should have CAST now. Sometimes you want to turn a number into a string or a string into a timestamp. In such cases, you can write CAST('1991-02-01', TIMESTAMP) now. The details are not quite, excuse me, cast in stone yet, so if you have a use case for this kind of thing, speak up now. The current draft also calls for a TIMESTAMP(tx) function – but since that's really not different from CAST(tx, TIMESTAMP), I'm trying to dissuade people from adding it.
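
      To sketch what that might look like in a full query – using the comma syntax from the current draft, which may still change, and an invented table with a string-valued date column:

      SELECT obs_id
      FROM ex.observations
      WHERE CAST(date_string, TIMESTAMP)
        > CAST('1991-02-01', TIMESTAMP)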

    • Services should have an IN_UNIT function now. That's a nifty thing in particular when you're re-using queries on different services. Just write, say, IN_UNIT(pmra, 'deg/yr') and never worry again if it's arcsec/yr, mas/yr, rad/cy, or whatever. The second argument, by the way, is written according to the Units in the Virtual Observatory standard. It's an optional feature according to the current standard, so perhaps it's too early to party, but I've found this extremely useful, and so I hope we'll see widespread adoption.
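
      As a sketch (catalogue name invented), here is a proper motion cut that keeps working whatever unit the table actually declares:

      SELECT TOP 10 *
      FROM ex.catalog
      WHERE IN_UNIT(pmra, 'mas/yr') > 100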

• Services should now have set operations. These are UNION, EXCEPT, and INTERSECT and are useful when you have two queries that result in the same table schema (because they won't work otherwise). Say you have two complex ways to filter rows from the table source, but you want to process both sorts of results further on – you can then say something like:

       SELECT <whatever complex> FROM (
           (SELECT a,b,c FROM source
             WHERE <crazy stuff>
             GROUP BY a, b, c)
         UNION
           (SELECT a,b,c FROM source
             WHERE <other crazy stuff>
             GROUP BY a, b, c)) AS q
       WHERE <more complex stuff over a, b, and c>

      – and similarly, EXCEPT lets you “punch a hole” in a result table.
      Another interesting use case would be to query many tables on a
      service like VizieR in one go; that still works if you make sure the
      tables defined by the sub-queries have the same columns. Given that a
      lot of cross-table operations actually boil down to JOINs and WHERE
      clauses, the set operations are used less than one would expect. But
      if you need them, there's no real alternative (short of downloading
      far too much and performing the operation locally, which of course
      defeats the purpose of TAP).
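
      The hole-punching, in the same placeholder style, would be:

        (SELECT a, b, c FROM source
          WHERE <broad condition>)
        EXCEPT
        (SELECT a, b, c FROM source
          WHERE <rows to punch out>)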
      
    • Common table expressions (“WITH”). DaCHS doesn't do these yet, and it will only pick them up if someone else implements them first. In the way ADQL 2.1 has them (“nonrecursive”), CTEs are little more than syntactic sugar, and I'm not quite sure if the additional implementation complexity is worth it. If you're curious, check CTEs in the postgres manual. If that makes you drool for WITH in ADQL, let me know. It'll not be too hard to sway me to put them in.
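
      In case you've not met CTEs before, here is roughly what a nonrecursive one would look like in ADQL (table and column names invented):

      WITH bright AS (
        SELECT ra, dec, vmag
        FROM ex.stars
        WHERE vmag < 6)
      SELECT COUNT(*) AS n_bright FROM bright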

• Bitwise Operations. That's when integers are treated as bit patterns. If this sounds like nerd stuff to you, well, it happens quite a bit in actual catalogs. See, for instance, Note 3 for the PPMXL. Suppose you wanted to use the flags column described there to exclude PPMXL objects that replaced multiple USNO-B1.0 objects (bit 3); right now, you have to write something like MOD(flags,16)>7. That's a bit of magic that everyone will have to think about for a while. With bitwise operations, you'll just write BITWISE_AND(flags,8)=8, which will look familiar to everyone who has used the pattern before (in particular, it's clear we're talking about bit 3). There still is discussion whether bitwise operations are common enough to warrant special syntax – the draft currently says the above should be written as flags&8=8 – or whether the functions DaCHS has at the moment (they're called BITWISE_AND, BITWISE_OR, BITWISE_XOR, and BITWISE_NOT) are good enough.
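
      So, with the functions DaCHS currently has, excluding the bit-3 objects from a PPMXL query would read something like this (do check the column names against the actual table metadata):

      SELECT raj2000, dej2000, pmra, pmde
      FROM ppmxl.main
      WHERE BITWISE_AND(flags, 8)=0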

• Offset. If you've ever done anything with ADQL, you'll know that SELECT TOP 10 * FROM hipparcos.main ORDER BY parallax DESC will give you the 10 objects with the largest parallaxes. But what if you want the next 10 closest stars? Well, OFFSET to the rescue:

      SELECT TOP 10 *
      FROM hipparcos.main
      ORDER BY parallax DESC
      OFFSET 10
      

There is another, more sinister, application for OFFSET, which happens to be the actual reason I put it into DaCHS' ADQL ages ago: written as OFFSET 0, several databases take it as a barrier for the query planner. This is explained to some degree in the classic DaCHS TAP example Crossmatch for a Guide Star – which still mentions the first hack I had built into DaCHS to let query authors rein in overzealous query planners.
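
      The pattern – as a sketch with placeholders – is to wrap the part the planner should leave alone into a subquery carrying OFFSET 0:

      SELECT <expensive expressions>
      FROM (
        SELECT * FROM <huge table>
        WHERE <very selective condition>
        OFFSET 0) AS q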

• LOWER and ILIKE. ADQL has been extremely weak on the side of text processing, so weak indeed that it wasn't nearly enough to cover the use cases for the registry when it moved to RegTAP. ADQL 2.1 adds two basic features – LOWER, a function that lets people query in a case-insensitive fashion, and ILIKE, an operator that is like LIKE, but again ignores case. While both features are obviously great as soon as people dump any kind of text (think object names) into their databases, I'm not terribly happy with ILIKE, as it does the same as RegTAP's ivoa_nocasematch user defined function, and it's always bad when two standards foresee two different mechanisms for the same thing.
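
      Both in one (made-up) example:

      SELECT *
      FROM ex.objects
      WHERE LOWER(obj_name)='m31'
        OR obj_name ILIKE 'andromeda%'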

    • Geometry-typed arguments. CIRCLE and POLYGON now accept POINTs in alternative constructor functions. That is, you can now say CIRCLE(POINT(ra, dec), radius) in addition to the traditional CIRCLE(ra, dec, radius). In itself, that's probably not terribly exciting, but when you have actual POINTs in your database, it's much more compact to write, say:

      SELECT *
      FROM zcosmos.data
      WHERE 0=CONTAINS(
        ssa_targetpos,
        CIRCLE(ssa_location, ssa_aperture))
      

(which would return rows for those spectra for which the declared aperture does not contain the declared target). Before, you'd have had to write some fairly ugly expression involving COORD1 and whatnot in order to achieve the same effect.

• Boolean expressions. That's another one that's still a bit up in the air. First, the rough goal is to allow boolean values in ADQL-accessible tables, which so far have been a hack at best. In the future, you should be able to say WHERE is_broken=True. However, people coming from other languages will find that odd, and indeed, in python I'd cringe at if is_broken==True:. What I'd expect is if is_broken:. Do we want this in ADQL? Currently, it's in the grammar (more or less like this), but this kind of thing makes it still harder to produce useful syntax error messages. Is it worth it, either way? I'm not sure.

    That about concludes my quick review of the new features of ADQL 2.1. If you'd like to know more, the current draft is on the IVOA document repository, and if you can deal with version control (you should!), you can follow the bleeding edge in the ADQL document in Volute. Discussion happens on the DAL mailing list.

Update (2018-04-13): Well, as to the CTEs, I couldn't resist after all, and they're in with DaCHS 1.1.3. And I have to say I love them – they weren't hard to put in, and once they're there, they make many queries a good deal more readable than before. I've even put in a server-defined example for CTEs on the Heidelberg TAP service showcasing a particularly compelling use case.

  • Space and Time not lost on the Registry

A histogram of times for which the Palomar-Leiden service has images: That's temporal service coverage right there.

    If you are an astronomer and you've ever tried looking for data in the Virtual Observatory Registry, chances are you have wondered “Why can't I enter my position here?” Or perhaps “So, I'm looking for images in [NIII] – where would I go?”

Both of these are examples of the use of Space-Time Coordinates (STC) in data discovery – yes, spectral coordinates count as STC, too, and I could make an argument for it. But this post is about something else: None of this has worked in the Registry up to now.

It's time to mend this blatant omission. After a bit of discussion on some of the IVOA's mailing lists, I posted an IVOA note last Thursday proposing exactly the next steps to take. It is, perhaps with a bit of over-confidence, called A Roadmap for Space-Time Discovery in the VO Registry. And I'd much appreciate feedback, in particular if you are a VO user and have ideas on what you'd like to do with such a facility.

    In this post, I'd like to give a very quick run-down on what is in it for (1) VO users, (2) service operators in general, and (3) service operators who happen to run DaCHS.

    First, users. We already are pretty good on spatial coverage (for about 13000 of almost 20000 resources), so it might be worth experimenting with that. For now, the corresponding table is only available on the RegTAP mirror at http://dc.g-vo.org/tap. There, you can try queries like:

    select ivoid from
    rr.table_column
    natural join rr.stc_spatial
    where
      1=contains(gavo_simbadpoint('HDF'), coverage)
      and ucd like 'phot.flux;em.radio%'
    

    to find – in this case – services that have radio fluxes in the area of the Hubble Deep Field. If these lines scare you or you don't know what to do with the stupid ivoids, check the previous post on this blog – it explains a bit more about RegTAP and why you might care.

Similarly cool things will, hopefully, some day be possible in spectrum and time. For instance, if you were interested in [SII] fluxes in the Crab Nebula in the early sixties, you could, some day, write:

    SELECT ivoid FROM
    rr.stc_temporal
    NATURAL JOIN rr.stc_spectral
    NATURAL JOIN rr.stc_spatial
    WHERE
      1=CONTAINS(gavo_simbadpoint('M1'), coverage)
      AND 1=ivo_interval_overlaps(
        6.69e-7, 6.75e-7,
        wavelength_start, wavelength_end)
      AND 1=ivo_interval_overlaps(
        36900, 38800,
        time_start, time_end)
    

As you can see, the spectral coordinate will, following (admittedly broken) VO convention, be given in meters of vacuum wavelength, and time in MJD. In particular the thing with the wavelength isn't quite settled yet – personally, I'd much rather have energy there. For one, it's independent of the embedding medium, but much more excitingly, it even remains somewhat sensible when you go to non-electromagnetic messengers.

    A pattern I'm trying to establish is the use of the user-defined function ivo_interval_overlaps, also defined in the Note. This is intended to allow robust query patterns in the presence of two intrinsically interval-valued things: The service's coverage and the part of the spectrum you're interested in, say. With the proposed pattern, either of these can degenerate to a single point and things still work. Things only break when both the service and you figure that “Aw, Hα is just 656.3 nm” and one of you omits a digit or adds one.
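
    For instance, if all you have is a single Hα wavelength rather than a band, the same pattern still works with a degenerate interval:

    1=ivo_interval_overlaps(
      6.563e-7, 6.563e-7,
      wavelength_start, wavelength_end)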

But that's academic at this point, because very few resources define their coverage in time and spectrum. Try it yourself:

    SELECT COUNT(*) FROM (
      SELECT DISTINCT ivoid FROM rr.stc_temporal) AS q
    

(the subquery with the DISTINCT is necessary because a single resource can have multiple rows for time and spectrum when there are multiple distinct intervals – think observation campaigns). If this gives you more than a few dozen rows when you read this, I strongly suspect it's no longer 2018.

To improve this situation, the service operators need to provide the information on the coverage in their resource records. Indeed, the registry schemas already have the notion of a coverage, and the Note, in its core, simply proposes to add three elements to the coverage element of VODataService 1.1. Two of these new elements – the coverage in time and spectrum – are simple floating-point intervals and can be repeated in order to allow non-contiguous coverage. The third element, the spatial coverage, uses a nifty data structure called a MOC, which expands to “HEALPix Multi-Order Coverage map” and is the main reason why I claim we can now pull off STC in the Registry: MOCs let databases and other programs easily and quickly manipulate areas on the sphere. Without MOCs, that's a pain.

    So, if you have registry records somewhere, please add the elements as soon as you can – if you don't know how to make a MOC: CDS' Aladin is there to help. In the end, your coverage elements should look somewhat like this:

    <coverage>
      <spatial>3/336,338,450-451,651-652,659,662-663
        4/1816,1818-1819,1822-1823,1829,1840-1841</spatial>
      <temporal>37190 37250</temporal>
      <temporal>54776 54802</temporal>
      <spectral>3.3e-07 6.6e-07</spectral>
<spectral>3.5e-06 2.0e-05</spectral>
      <waveband>Optical</waveband>
      <waveband>Infrared</waveband>
    </coverage>
    

The waveband elements are holdovers from VODataService 1.1. They are still in use (prominently, for one, in SPLAT), and it's certainly still a good idea to keep giving them for the foreseeable future. You can also see how you would represent multiple observing campaigns and different spectral ranges.

Finally, if you're running DaCHS and you're using it to generate registry records (and there's almost no excuse for not doing so), you can simply write a coverage element into your RD starting with DaCHS 1.2 (or, if you run betas, 1.1.1, which is already available). You'll find lots of examples at the usual place. A relatively interesting example is the resource descriptor of plts; it has this:

<coverage>
  <updater spaceTable="data" spectralTable="data" mocOrder="4"/>
      <spectral>3.3e-07 6.6e-07</spectral>
      <temporal>37190 37250</temporal>
      <temporal>38776 38802</temporal>
      <temporal>41022 41107</temporal>
      <temporal>41387 41409</temporal>
      <temporal>41936 41979</temporal>
      <temporal>43416 43454</temporal>
      <spatial>3/282,410 4/40,323,326,329,332,387,390,396,648-650,1083,1085,1087,1101-1103,1123,1125,1132-1134,1136,1138-1139,1144,1146-1147,1173-1175,1216-1217,1220,1223,1229,1231,1235-1236,1238,1240,1597,1599,1614,1634,1636,1728,1730,1737,1739-1740,1765-1766,1784,1786,2803,2807,2809,2812</spatial>
    </coverage>
    

    This particular service archives plate scans from the Palomar-Leiden Trojan surveys; these were looking for Trojan asteroids (of Jupiter) using the Palomar 122 cm Schmidt and were conducted in several shortish campaigns between 1960 and 1977 (incidentally, if you're looking for things near the Ecliptic, this stuff might still hold valuable insights for you). Because the fill factor for the whole time period is rather small, I manually extracted the time coverage; for that, I ran select dateobs from plts.data via TAP and made the histogram plot above. Zooming in a bit, I read off the limits in TOPCAT's coordinate display.

The other coverages, however, were put in automatically by DaCHS. That's what the updater element does: for each axis, you can say where DaCHS should look, and it will then fill in the appropriate data from what it guesses gives the relevant coordinates – that's straightforward for standard tables like the ones behind SSAP and SIAP services (or obscore tables, for that matter), perhaps a bit more involved otherwise. To say “just do it for all axes”, give the updater a single sourceTable attribute.

    Finally, in this case I'm overriding mocOrder, the order down to which DaCHS tries to resolve spatial features. I'm doing this here because in determining the coverage of image services DaCHS right now only considers the centers of the images, and that's severely underestimating the coverage here, where the data products are the beautiful large Schmidt plates. Hence, I'm lowering the resolution from the default 6 (about one degree linearly) to still give some approximation to the actual data coverage. We'll fix the underlying deficit as soon as pgsphere, the postgres extension which is actually dealing with all the MOCs, has support for turning circles and polygons into MOCs.

When you have defined an updater, just run dachs limits q.rd, and DaCHS will carefully (preserving your indentation) re-write the RD to contain what DaCHS has worked out from your table (but careful: it will overwrite what was previously there; so, make sure you ask DaCHS to only deal with axes you're not handling manually).

    If you feel like writing code discovering holes in the intervals, ideally already in the database: that would be great, because the tighter the intervals defined, the fewer false positives people will have in data discovery.

    The take-away for DaCHS operators is:

    1. Add STC coverage to your resources as soon as you've updated to DaCHS 1.2

2. If you don't need the tightest coverage declaration conceivable, all you have to do is add:

      <coverage>
        <updater sourceTable="my_table"/>
      </coverage>
      

      to your RD (where my_table is the id of your service's “main” table) and then run dachs limits q.rd

    3. For special effects and further information, see Coverage Metadata in the DaCHS reference documentation

    4. If you have a nice postgres function that splits a simple coverage interval up so the filling factor of a set of new intervals increases (or know a nice, database-compatible algorithm to do so) – please let me know.

  • Say hello to RegTAP

GAVO's WIRR registry interface in action to find resources with radio parallaxes.

RegTAP is one of those standards that a scientist will normally not see – it works in the background and makes, for instance, TOPCAT display the Cone Search services matching some key words. And it's behind services like WIRR, our Web Interface to the Relational Registry (“Relational Registry” being the official name for RegTAP), which lets you do some interesting data discovery beyond what current clients support. In the screenshot above, for instance (try it yourself), I'm looking for cone search services having parallaxes, presumably from radio observations. You could now transmit the services you've found to, say, TOPCAT or your own pyvo-based program to start querying them.

The key point in this query is the use of UCDs – these let services declare fairly unambiguously what kind of physics (if you take that word with a grain of salt) they are talking about. In the example, pos.parallax means, well, a parallax, and the percent character is a wildcard (coming not from UCDs, but from ADQL). That wildcard is a good idea here because without it we might miss things like pos.parallax;obs and pos.parallax;stat.fit that people might have used to distinguish “raw” and “processed” estimates.
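
    In raw RegTAP (much more on which below), that sort of constraint is just an ADQL pattern match on the column UCDs, along the lines of:

    SELECT DISTINCT ivoid
    FROM rr.table_column
    WHERE ucd LIKE 'pos.parallax%'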

    UCDs are great for data discovery. Really.

    Sometimes, however, clicking around in menus just isn't good enough. That's when you want the full power of RegTAP and write your very own queries. The good news: If you know ADQL (and you should!), you're halfway there already.

    Here's one example of direct RegTAP use I came up with the other day. The use case was discovering data collections that give the effective temperatures of components of binary star systems.

    If you check the UCD list, that “physics” translates into data that has columns with UCDs of phys.temperature and meta.code.multip at the same time. To translate that into a RegTAP query, have a look at the tables that make up a RegTAP service: its ”schema”. Section 8 of the standard lists all the tables there are, and there's an ADASS poster that has an image of the schema with the more common columns illustrated. Oh, and if you're new to RegTAP, you're probably better off briefly studying the examples first to get a feeling for how RegTAP is supposed to work.

    You will find that a pair of ivoid – the VO's global resource identifier – and a per-resource table index uniquely identify a table within the entire registry. So, an ADQL query to pick out all tables containing temperatures and component identifiers would look like this:

    SELECT DISTINCT ivoid, table_index
    FROM
    rr.table_column AS t1
    JOIN rr.table_column AS t2
    USING (ivoid, table_index)
    WHERE t1.ucd='phys.temperature'
    AND t2.ucd='meta.code.multip'
    

    – the DISTINCT makes it so even tables that have lots of temperatures or codes only turn up once in our result set, and the somewhat odd self-join of the rr.table_column table with itself lets us say “make sure the two columns are actually in the same table”. Note that you could catch multi-table resources that define the components in one table and the temperatures in another by just joining on ivoid rather than ivoid and table_index.

    You can run this query on any RegTAP endpoint: GAVO operates a small network of mirrors behind http://reg.g-vo.org/tap, there's the ESAC one at http://registry.euro-vo.org/regtap/tap, and STScI runs one at http://vao.stsci.edu/RegTAP/TapService.aspx. Just use your usual TAP client.

But granted, the result isn't terribly user-friendly: just identifiers and numbers. We'd at least like to see the names and descriptions of the tables so we know if the data is somehow relevant.

    RegTAP is designed so you can locate the columns you would like to retrieve or constrain and then just NATURAL JOIN everything together. The table_description and table_name columns are in rr.res_table, so all it takes to see them is to take the query above and join its result like this:

    SELECT table_name, table_description
    FROM rr.res_table
    NATURAL JOIN (
      SELECT DISTINCT ivoid, table_index
      FROM
      rr.table_column AS t1
      JOIN rr.table_column AS t2
      USING (ivoid, table_index)
      WHERE t1.ucd='phys.temperature'
      AND t2.ucd='meta.code.multip') as q
    

If you try this, you'll see that we'd like to get the descriptions of the resources embedding the tables, too, in order to get an idea of what we can expect from a given data collection. And if we later want to find services exposing the tables (WIRR is nice for that – try the ivoid constraint – but for this example all resources currently come from VizieR, so you can directly use VizieR's TAP service to interact with the tables), you want the ivoids. Easy: just join rr.resource and pick columns from there:

    SELECT table_name, table_description, res_description, ivoid
    FROM rr.res_table
    NATURAL JOIN rr.resource
    NATURAL JOIN (
      SELECT DISTINCT ivoid, table_index
      FROM
      rr.table_column AS t1
      JOIN rr.table_column AS t2
      USING (ivoid, table_index)
      WHERE t1.ucd='phys.temperature'
      AND t2.ucd='meta.code.multip') as q
    

    If you've made it this far and know a bit of ADQL, you probably have all it really takes to solve really challenging data discovery problems – as far as Registry metadata reaches, that is, which currently does not include space-time coverage. But stay tuned, more on this soon.

In case you're looking for a more systematic introduction into the world of the Registry and RegTAP, there are two... ouch. Can I really link to Elsevier papers? Well, here goes: 2014A&C.....7..101D (a.k.a. arXiv:1502.01186) on the Registry as such, and 2015A&C....11...91D (a.k.a. arXiv:1407.3083) mainly on RegTAP.

  • DaCHS 1.1 released

    Today, I have released DaCHS 1.1, with the main selling point that DaCHS should now speak TAP 1.1 (as defined in the current draft).

    First off, if you're not yet on DaCHS 1.0, please read the corresponding release article before upgrading.

As usual, the general upgrading instructions are available in the operator's guide (in short: do a dachs val ALL before the Debian upgrade). This time, I'd recommend using the opportunity to upgrade your underlying server to stretch if you haven't done so already. If you do that, please have a look at the hints on postgres upgrades. Stretch comes with postgres 9.6 (jessie: 9.4). Postgres upgrades are generally safe, but please take a dump before migrating anyway.

    So, with this out of the way, here's a short list of the major changes from DaCHS 1.0 to DaCHS 1.1:

• DaCHS now officially requires python 2.7. If this really is a problem for you, please shout – it wouldn't be hard to maintain 2.6 compatibility, but by now we feel there's no reason to bother any more.
    • Now supporting TAP 1.1; in particular, TOP n doesn't trump MAXREC any more, and it doesn't affect OVERFLOW indication, which may break things that used TOP to override DaCHS' default TAP match limit of 2000. Also, TAP_SCHEMA is updated (this happens as a side effect of dachs upgrade).
    • Now serialising spoint, scircle, and friends to DALI 1.1 xtypes (timestamp, point, polygon, circle). Fields explicitly marked with adql:POINT or adql:REGION will still be serialised to STC-S. Do this only if you have no choice (DaCHS has this for obscore and epntap s_region right now).
    • The output column selection is sanitised. This may make for slight changes in service responses, in particular in VOTable formats. See Output Tables in the reference documentation for details if you think this might hit you.
• DaCHS no longer comes with an outdated version of pyparsing and instead uses what's installed on the system. The Debian package further re-uses additional system resources if available (rjsmin, jquery).
    • DaCHS now tries a bit harder to come up with sensible names for SODA result files.
    • map/@source is no longer limited to identifier-like strings; any key that's in your source is fair game.
    • For incremental imports with data that's updated now and then, there's now ignoreSources/@fromdbUpdating.
    • Relative imports from custom code ("import foo" in a custom core, for instance, getting res/foo.py) no longer work. See Importing Modules in the reference documentation for details.
    • This release fixes a severe bug in the creation of obscore metadata from SSAP tables. If you use //obscore#publishSSAPHCD or //obscore#publishSSAPMIXC mixins, update the obscore definitions by running dachs imp -m <rdid>, followed by dachs imp //obscore (the latter is only necessary once at the end).
    • You can now define a footer.html template that's added at the foot of the main page content – with a bit of CSS magic, this lets you overwrite almost anything on DaCHS HTML pages.

    As always, please complain early if something breaks for you; our regression tests can only cover so much. In particular, our support list is there for you.

    Update (2017-12-06): In particular on jessie, you may see that all DaCHS packages are being held back. To resolve this situation, manually say apt-get install python-gavoutils python-gavostc.
