  • The Case of the Disappearing Bits

    [number line with location markers]

    Every green line in this image stands for a value exactly representable in a floating point number of finite size. As you can see, it's mostly white out there [source]

    While I was preparing the publication of Coryn Bailer-Jones' distance estimates based on Gaia eDR3 (to be released about tomorrow), Coryn noticed I was swallowing digits from his numbers. My usual reaction of “aw, these are meaningless anyway because your errors are at least an order of magnitude larger” didn't work this time, because Gaia is such an incredible machine that some of the values really have six significant decimal digits. For an astronomical distance! If I had a time machine, I'd go back to F.W. Bessel right away to make him pale with envy.

    I'm storing these distances as PostgreSQL REALs, so these six digits are perilously close to the seven decimal digits that the 23 bits of mantissa of single precision IEEE 754 floats usually translate to. Suddenly, being cavalier with the last few bits of the mantissa isn't just a venial sin. It will lose science.
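
    If you want to see just how tight things are, here is a quick sanity check (a sketch using numpy; the value is the r_med_geo that will show up below):

    import numpy

    x = 1430.90332                # six significant decimal digits
    print(numpy.float32(x))       # -> 1430.9033: one digit already shaky
    # values this close become indistinguishable in single precision:
    print(numpy.float32(1430.90331) == numpy.float32(x))   # -> True
    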

    So, I went hunting for the bits, going from parsing (in this case C's sscanf) through my serialisation into Postgres binary copy material (DaCHS operators: this is using a booster) to pulling the material out of the database again. And there I hit it: the bits disappeared between copying them in and retrieving them from the database.

    Wow. Turns out: it's a feature, and one I should have been aware of, since Postgres' docs have a prominent warning box where they explain its floating point types: without setting extra_float_digits, it will cut off bits. And it has done this ever since the dawn of DaCHS (in Postgres terms, version 8.2 or so).

    Sure enough (edited for brevity):

    gavo=$ select r_med_geo from gedr3dist.main
    gavo-$ where source_id=563018673253120;
        1430.9
    
    gavo=$ set extra_float_digits=3;
    gavo=$ select r_med_geo from gedr3dist.main
    gavo-$ where source_id=563018673253120;
     1430.90332
    

    Starting with its database schema 26 (which is the second part of the output of dachs --version), DaCHS will configure its database roles to always have extra_float_digits set to 3; operators beware: this may break your regression tests after the next upgrade.

    If you want to configure your non-DaCHS role, too, all it takes is:

    alter role (you) set extra_float_digits=3;
    

    You could also make the entire database or even the entire cluster behave like that; but then, losing these bits isn't always a bad idea: it really makes the floats prettier while most of the time not losing significant data. It's just when you want to preserve the floats as you get them – and with science data, that's mostly a good idea – that we can't really afford that prettiness.
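
    If you talk to Postgres from your own client code rather than through a role setting, you can also ask for the extra digits per session. A minimal sketch with psycopg2 (the connection string is a placeholder; adapt it to your setup):

    import psycopg2

    conn = psycopg2.connect("dbname=gavo")  # placeholder DSN
    with conn.cursor() as cur:
        # request the extra digits for this session only
        cur.execute("SET extra_float_digits=3")
        cur.execute("SELECT r_med_geo FROM gedr3dist.main"
            " WHERE source_id=563018673253120")
        print(cur.fetchone())
    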

    Update (2021-04-22): It turns out that this was already wrong (for some meaning of wrong) when I wrote it. Since version 12, Postgres has used the shortest-precise output format by default (and whenever extra_float_digits is positive). The official documentation has a nice summary of the problem and the way post-12 Postgres addresses it. So: expect your float-literal-comparing regression tests to break after the upgrade to bullseye.

  • Sofa instead of Granada

    Screenshot from an online talk

    Gesticulating wildly to a computer is what happens in an online conference. To me, at least. Let's hope nobody watched me through the window.

    It was already in the wee hours of Friday last week (CET) when the second "virtual Interop" had its rather unceremonious closing ceremony. Its predecessor in May had an air of a state of emergency about it. For instance, all sessions were monothematic. That was nice on the one hand, because a relatively large part of the time was available for discussion – which, really, is what the Interops are about. But then Interops are also about noticing what everyone else in the Virtual Observatory is cooking up, for which the short-ish talks we usually have at Interops work really well.

    In contrast to that first Corona Interop, this second one, replacing what would have taken place in Granada, Spain, had a much more conventional format, which again accommodated many talks. But of course, this made one feel the lack of opportunities to quickly hash out a problem during a coffee break or in a spontaneous splinter quite a bit more.

    Be that as it may, I would like to give you some insights on what I'm currently up to at the IVOA level; I am grateful for any feedback you can give on any of these topics.

    Given that I currently chair the Semantics Working group, there was a natural focus on topics around vocabularies, and I gave two talks in that department. The one in DAL (DAL is the working group that builds the actual access protocols such as TAP or SIAP) was mainly on Datalink-related aspects of my Vocabularies in the VO 2 draft (VocInVO2), which in particular was an opportunity to thank everyone involved in the Vocabulary Enhancement Proposals we have been running this last year (all of which were about Datalink and hence closely tied to DAL). One thing I was asking for was reviews on a github pull request that would make the bysemantics method of Datalink accesses semantics-aware; basically, as intended by the original Datalink authors, when asking for #calibration links, this will also return, say, #bias links. If you can spare a moment for this: Please do!
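
    For context, this is roughly what the pyVO side of that looks like (a sketch; the datalink URL is a placeholder, and whether narrower terms like #bias actually come back depends on that pull request being merged):

    import pyvo
    from pyvo.dal.adhoc import DatalinkResults

    # placeholder URL: point this at any datalink document
    dl = DatalinkResults.from_result_url(
        "http://example.org/datalink?ID=someid")
    for link in dl.bysemantics("#calibration"):
        # a semantics-aware bysemantics would also yield rows
        # with narrower terms, such as #bias
        print(link.semantics, link.access_url)
    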

    Another thing I tried to raise some interest for is the proposed vocabulary of product types; this, I think, should eventually define what people may put into the dataproduct_type column of Obscore results, and there are related uses in Datalink and, believe it or not, the registration of SSAP (spectral) services. A question Alberto raised while I was discussing that made me realise I forgot to mention another vocabularies-related development relevant for DAL: I've put the gavo_vocmatch ADQL user-defined function into DaCHS. It lets you match something against a term or its narrower terms, referencing an IVOA vocabulary. For instance, if we had different sorts of time series (which, of course, would be odd for obscore that has the o_ucd column for this kind of thing), you could, using ADQL, still get all time series by querying:

    SELECT TOP 5 *
    FROM ivoa.obscore
    WHERE
      1=gavo_vocmatch(
        'product-type',
        'timeseries',
        dataproduct_type)
    

    Here, the first argument is the vocabulary name (whatever is after http://www.ivoa.net/rdf/ in the vocabulary URL), the second the “root” term, and the third the column to match against. Since Postgres, for now, isn't aware of IVOA vocabularies, the second argument must be a literal string rather than, say, an expression involving columns.

    I gave a second semantics-related talk in the Registry session. That had its focus on the Unified Astronomy Thesaurus (UAT), from which people should pick the subject keywords in the VO Registry (actually, they should pick from its representation at http://www.ivoa.net/rdf/uat). I'll probably blog about that a little more some other time. For now, let me recommend a little UAT-based game on my Semantics Based Registry Browser sembarebro: Choose two terms that are pretty far apart (like, perhaps, ionized-coma-gases and cosmic-background-radiation) and then try to join the two sub-graphs. Warning: This may waste your time. But it will acquaint you with the UAT, which may be a good thing.

    In that second talk, I also mentioned a second draft vocabulary I've put up in the past six months, http://www.ivoa.net/rdf/messenger. This builds upon the terms for VODataService's waveband element, which enumerated certain flavours of photons (like Radio, Optical, or X-ray). Now that we explore other messengers as well and have more and more solar system resources in the Registry, I'm arguing we ought to open up things by making “Photon” explicit in there and then adding Neutrinos and, later, other messengers. I've received a certain amount of pushback there on mixing the electromagnetic spectrum with particle types; on the other hand, the hierarchical nature of our vocabularies would, I think, let us smartly get away with that.

    Speaking about solar system resources, I'm also listed as an author on Stéphane Erard's talk on EPN-TAP and EPNCore v2.0, probably due to my involvement in finally bringing EPN-TAP into the IVOA document repository. I've already talked about that in a 2017 post on this blog – and again, if you're interested in solar system data, this would be a good time to review the EPN-TAP working draft.

    Talking about things regular readers of this blog will have heard of: I referenced September's Crazy Shapes post in a talk on MOCs in pgsphere, together with a fervent appeal to data centers to become involved in pgsphere maintenance.

    And then there was my colleague Margarida's talk on LineTAP, a proposal to obsolete the little-used SLA protocol (which lets people search for spectral lines) with something combining the much more successful VAMDC with our beloved TAP. Me, I'm in this because I'd like to bring TOSS data closer to VAMDC – but also because having competing infrastructures for the same thing sucks.

    And finally, I gave a talk I've called Data Model Posture Review in a session of the Data Models working group; I was somewhat worried that given its rather skeptical outlook it wouldn't be really well-received. But in fact quite a few people shared my main conclusions – and perhaps it was another step towards resolving my decade-old spot of pain: that the VO still doesn't offer tech to reliably bring two catalogues to the same epoch without human intervention.

    With this number of talks I've been involved in, I'm essentially back to the level of a normal Interop. Which means I've been fairly knocked-out on Friday. And I can't lie: I still regret I didn't get to spend a few more warm days in Granada. Corona begone!

  • DaCHS 2.2 is out

    Image: DaCHS "entails" 2.2

    DaCHS 2.2 adds support for what simple semantics we currently do in the VO. Which is a welcome excuse to abuse one of the funny symbols semanticians love so much.

    Today, I've released DaCHS 2.2, the second stable version of DaCHS running on Python 3. Indeed, we have ironed out a few sore spots that have put that “stable” into question, especially if you didn't run things on Debian Buster. Mind you, playing it safe and just going for Debian is still recommended: Compared to the Python 2 world, where things largely didn't break for a decade, the Python 3 universe is still shaking out, and so the versions of dependencies do matter. It's actually fairly gruesome how badly pyparsing 2.4 will break DaCHS. But that's for another day.

    Despite this piece of fearmongering, it'd be great if you could upgrade your installations if you are running DaCHS, and it's pretty safe if you're on Debian buster anyway (and if you're running Debian in the first place, you should be running buster by now).

    Here are the more notable changes in this release:

    • DaCHS can now (relatively easily) write time series in the form of what Ada Nebot's Time Series Annotation note proposes. See the tutorial chapter on building time series for how to do that in practice. Seriously: If you have time series to publish, by all means try this out. The specification can still be fixed, and so this is the perfect time to find problems with the plan.
    • The 2.2 release contains support for the MOC ADQL functions mentioned in the last post on this blog. Of course, to make them work, you will still have to acquaint your database with the new functionality.
    • DaCHS has learned to use IVOA vocabularies as per the current draft for Vocabularies in the VO 2. The most visible effect for you probably is that DaCHS now warns if your subject keywords are not taken from the Unified Astronomy Thesaurus (UAT) – which they almost certainly are not, because the actual format of these keywords is a bit funky. On the other hand, if you employ the “plain” root page template (see the root template in our templating guide if you are not sure what I am talking about here), you will get nice, human-friendly labels for the computer-friendly terms you ought to put into subjects. In case you don't bother: Given I'm currently serving as chair of the semantics working group of the IVOA, the whole topic will certainly come up again soon, and at that point I will probably also talk about another semantics-related newcomer to DaCHS, the gavo_vocmatch ADQL UDF.
    • There is a new command dachs datapack for interacting with frictionless data packages. The idea is that you can say dachs datapack create myres/q myres.pack and obtain an archive of all that is necessary to re-create myres on another DaCHS installation, where you would say dachs datapack load myres.pack. Frankly, this isn't much different from just tarring up the resource directory at this point, except that any cruft that may have accumulated in the directory is skipped and there is a bit of structured metadata. But then interoperability always starts slowly. Note, by the way, that this certainly does not teach DaCHS to do anything sensible with third-party data packages; while I've not thought hard about this, as it seems a remote use case, I am pretty sure that even the “tabular data packages” that refine the rough general metadata quite a bit simply have nowhere near enough metadata to create a useful VO resource or TAP table.
    • As part of my never-ending struggle against bitrot (in case you've always wondered what “curation” means: that, essentially), I'm running dachs val -vc ALL in my own data center once every month. This used to traverse the file system to locate all RDs defined on a box and then make sure they are still ok and their definitions match the database schema. That behaviour has now changed a bit: it will only check published RDs now. I cannot lie: the main reason for the change is that on my production machine the file system traversal has taken longer and longer as data accumulated. But then, beyond that, there is much less to worry about when unpublished material gets a little bit mouldy. To get back the old behaviour of validating all RDs that are reachable by the server, use ALL_RECURSE instead of ALL.
    • DaCHS has traditionally assumed that you are running multiple services on one site, which is why its root page is rendered over a service that exposes metadata on local resources. If that doesn't quite work for how you use DaCHS – perhaps because you want to have your own custom renderers and data functions on your root page, perhaps because you only have one browser-based service and that should be the root page right away – you can now override what is shown when people access the root URI of your DaCHS installation by setting the [web]root config item to the path of the resource you want as root (e.g., myres/q/s/fixed when the root page should be made by the fixed renderer on the service s within the RD myres/q; see the sketch after this list).
    • Scripting in DaCHS is a powerful way to execute python or SQL code when certain things happen. That seems an odd thing to want until you need it; then you need it badly. Since DaCHS 2.2, scripts executed before or after the creation of a table, before its deletion, or after its meta data has been updated, can sit on tables (where they have always belonged). Before, they could only be on makes (where they can still sit, but of course they are then only executed if the table is operated through that particular make) and RDs (from where they could be copied). That latter location is now forbidden in order to free up RD scripts for later sanitation. Use STREAM and FEED instead if you really used something like that (and I'd bet you don't).
    • Minor behavioural changes:
      1. Due to a bug, you could write things like <schema foo="bar">my_schema</schema>, i.e., have attributes on attributes written in element form. That is now flagged as an error. Since that attribute was fed to the embedding element, you might need to add it there.
      2. If you have custom flot plots in one of your templates (and you don't if you don't know what I'm talking about), you now have to set style to Points or Lines where you had usingIndex 0 or 1 before.
      3. The sidebar template no longer has links to a privacy policy (that few bothered to fill out). See extra sidebar items in the tutorial on how to get them back or add something else.
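
    To illustrate the [web]root item mentioned in the list: in your gavo.rc, that setting would look roughly like this (assuming the ini-style configuration file DaCHS reads; myres/q/s/fixed is the example path from above):

    [web]
    root: myres/q/s/fixed
    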

    The most important change comes last: The default logo DaCHS shows unless you override it is no longer the GAVO logo. That's, really, been inappropriate from the start. It's now the DaCHS logo, the thing that's in this post's article image. Which isn't quite as tasteful as the GAVO one, true. But I trust we'll all get used to it.

  • Crazy Shapes in TAP

    OpenNGC shapes

    A complex shape from OpenNGC: MOCs need not be convex, or simply connected, or anything.

    So far when you did spherical geometry in ADQL, you had points, circles, and polygons as data types, and you could test for intersection and containment as operations. This feature set is a bit unsatisfying because there are no (algebraic) groups in this picture: when you join or intersect two circles, the result is a circle only if one contains the other. With non-intersecting polygons, you will again not end up with a (simply connected) spherical polygon.

    Enter MOCs (which I've mentioned a few times before on this blog): these are essentially arbitrary shapes on the sky, in practice represented through lists of pixels, cleverly done so they can be sufficiently precise and rather compact at the same time. While MOCs are powerful and surprisingly simple in practice, ADQL doesn't know about them so far, which limits quite a bit what you can do with them. Well, DaCHS has been able to serve them since about version 1.3 if you managed to push them into the database, but there were no operations you could do on them.

    Thanks to work on the pgsphere database extension done by credativ (who were really nice to work with), funded with some money we had left from our previous e-inf-astro project (BMBF FKZ 05A17VH2), this has now changed. At least on the GAVO data center, MOCs are now essentially first-class citizens that you can create, join, and intersect within ADQL, and you can retrieve the results. All operators of DaCHS services are just a few updates away from being able to offer the same.

    So, what can you do? To follow what's below, get a sufficiently new TOPCAT (4.7 will do) and open its TAP client on http://dc.g-vo.org/tap (a.k.a. GAVO DC TAP).

    Basic MOC Operations in TAP

    First, let's make sure you can plot MOCs; run

    SELECT name, deepest_shape
    FROM openngc.shapes
    

    Then do Graphics/Sky Plot, and in the window that then pops up, Layers/Add Area Control. Then select your new table in the Position tab, and finally choose deepest_shape as the area (yeah, this could become a bit more automatic and probably will over time). You will then see the footprints of a few NGC objects (OpenNGC's author Mattia Verga hasn't done them all yet; he certainly welcomes help on OpenNGC's version control repo), and you can move around in the plot, yielding perhaps something like Fig. 1.

    Now let's color these shapes by object class. If you look, openngc.data has an obj_type column – let's group on it:

    SELECT
      obj_type,
      shape,
      AREA(shape) AS ar
    FROM (
      SELECT obj_type, SUM(deepest_shape) AS shape
      FROM openngc.shapes
      NATURAL JOIN openngc.data
      GROUP BY obj_type) AS q
    

    (the extra subquery is a workaround necessary because the area function wants a geometry or a column reference, and ADQL doesn't allow aggregate functions – like sum – as either of these).

    In the result you will see that so far, contours for about 40 square degrees of star clusters with nebulae have been put in, but only 0.003 square degrees of stellar associations. And you can now plot by the areas covered by the various sorts of objects; in Fig. 2, I've used Subsets/Classify by Column in TOPCAT's Row Subsets to have colours indicate the different object types – a great workaround when one deals with categorical variables in TOPCAT.

    MOCs and JOINs

    Another table that already has MOCs in it is rr.stc_spatial, which has the coverage of VO resources (and is the deeper reason I've been pushing improved MOC support in pgsphere – background); this isn't available for all resources yet, but at least about 16000 are in already. For instance, here's how to get the coverage of resources talking about planetary nebulae:

    SELECT ivoid, res_title, coverage
    FROM rr.subject_uat
      NATURAL JOIN rr.stc_spatial
      NATURAL JOIN rr.resource
    WHERE uat_concept='planetary-nebulae'
      AND AREA(coverage)<20
    

    (the rr.subject_uat table is a local extension to RegTAP that will be the subject of some future blog post; you could also use rr.res_subject, but because people still use wildly different keyword schemes – if any –, that wouldn't be as much fun). When plotted, that's the left side of Fig. 3. If you do that yourself, you will notice that the resolution here is about one degree, which is a special property of the sort of MOCs I am proposing for the Registry: They are of order 6. Resolution in MOC goes up with order, doubling with every step. Thus MOCs of order 7 have a resolution of about half a degree, MOCs of order 5 a resolution of about two degrees.
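
    If you want to double-check these resolutions, here is a quick back-of-the-envelope calculation (assuming the usual HEALPix geometry behind MOCs, where an order-n MOC divides the sky into 12·4^n pixels):

    import math

    FULL_SKY_SQDEG = 4*math.pi*(180/math.pi)**2   # about 41253 deg²

    for order in (5, 6, 7):
        npix = 12*4**order                        # pixels on the full sphere
        scale = math.sqrt(FULL_SKY_SQDEG/npix)    # rough pixel edge length
        print(order, "%.2f deg"%scale)
    # prints about 1.8, 0.9, and 0.5 degrees, matching the text
    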

    One possible next step is to fetch the intersection of each of these coverages with, say, that of the DFBS (cf. the post on Byurakan spectra). That would look like this:

    SELECT
      ivoid,
      res_title,
      gavo_mocintersect(coverage, dfbscoverage) as ovrlp
    FROM (
      SELECT ivoid, res_title, coverage
      FROM rr.subject_uat
      NATURAL JOIN rr.stc_spatial
      NATURAL JOIN rr.resource
      WHERE uat_concept='planetary-nebulae'
      AND AREA(coverage)<20) AS others
    CROSS JOIN (
      SELECT coverage AS dfbscoverage
      FROM rr.stc_spatial
      WHERE ivoid='ivo://org.gavo.dc/dfbsspec/q/spectra') AS dfbs
    

    (the DFBS' identifier I got with a quick query on WIRR). This uses the gavo_mocintersect user defined function (UDF), which takes two MOCs and returns a MOC of their common pixels. This is another important part of why MOCs are so cool: together with union and intersection, they form groups. It should not come as a surprise that there is also a gavo_mocunion UDF. The sum aggregate function we've used in our grouping above is (conceptually) built on that.

    Planetary Nebula footprint and plate matches

    Fig. 3: Left: The common footprint of VO resources declaring a subject of planetary-nebulae (and declaring a footprint). Bottom right: Heidelberg plates intersecting this, and, in blue, level-6 intersections. Above this, an enlarged detail from this plot.

    You can also convert polygons and circles to MOCs using the (still DaCHS-only) MOC constructor. For instance, you could compute the coverage of all resources dealing with planetary nebulae, filtering out obviously over-eager ones by limiting the total area, and then match that against the coverages of images in, say, the Königstuhl plate archives HDAP. Watch this:

    SELECT
      im.*,
      gavo_mocintersect(MOC(6, im.coverage), pn_coverage) as ovrlp
    FROM (
      SELECT SUM(coverage) AS pn_coverage
      FROM rr.subject_uat
      NATURAL JOIN rr.stc_spatial
      WHERE uat_concept='planetary-nebulae'
      AND AREA(coverage)<20) AS c
    JOIN lsw.plates AS im
    ON 1=INTERSECTS(pn_coverage, MOC(6, coverage))
    

    – so, the MOC(order, geo) function should give you a MOC for other geometries. There are limits to this right now because of limitations in the underlying MOC library; in particular, non-convex polygons are not supported right now, and there are precision issues. We hope this will be rectified soon-ish when we base pgsphere's MOC operations on the CDS HEALPix library. Anyway, the result of this is plotted on the right of Fig. 3.

    Open Ends

    In case you have MOCs from the outside, you can also construct MOCs from literals, which happen to be the ASCII MOCs from the standard. This could look like this:

    SELECT TOP 1
      MOC('4/30-33 38 52 7/324-934') AS ar
    FROM tap_schema.tables
    

    For now, you cannot combine MOCs in CONTAINS and INTERSECTS expressions directly; this is mainly because in such an operation, the machine has to decide on the order of the MOC the other geometries are converted to (and computing the predicates between geometry and MOC directly is really painful). This means that if you have a local table with MOCs in a column cmoc that you want to compare against a polygon-valued column coverage in a remote table like this:

    SELECT db.* FROM
      lsw.plates AS db
      JOIN tap_upload.t6
    ON 1=CONTAINS(coverage, cmoc) -- fails!
    

    you will receive a rather scary message of the type “operator does not exist: spoly <@ smoc”. To fix it (until we've worked out how to reasonably let the computer do that), explicitly convert the polygon:

    SELECT db.* FROM
      lsw.plates AS db
      JOIN tap_upload.t6
    ON 1=CONTAINS(MOC(7, coverage), cmoc)
    

    (be stingy when choosing the order here – MOCs that already exist are fast, but making them at high order is expensive).

    Having said all that: what I've written here is bleeding-edge, and it is not standardised yet. I'd wager, though, that we will see MOCs in ADQL relatively soon, and that what we will see will not be too far from this experiment. Well: Some rough edges, I'd hope, will still be smoothed out.

    Getting This on Your Own DaCHS Installation

    If you are running a DaCHS installation, you can contribute to takeup (and if not, you can stop reading here). To do that, you need to upgrade to DaCHS's latest beta (anything newer than 2.1.4 will do) to have the ADQL extension, and, even more importantly, you need to install the postgresql-pgsphere package from our release repository (that's version 1.1.4 or newer; in a few weeks, getting it from Debian testing would work as well).

    You will probably not get that automatically, because if you followed our normal installation instructions, you will have a package called postgresql-11-pgsphere installed (apologies for this chaos; as usual, every single step made sense). The upshot is that with our release repo added, sudo apt install postgresql-pgsphere should give you the new code.

    That's not quite enough, though, because you also need to acquaint the database with the new functions. This can only be done with database administrator privileges, which DaCHS by design does not possess. What DaCHS can do is figure out the commands to do that when it is called as dachs upgrade -e. Have a look at the output, and if you are satisfied it is about what you expect, just pipe it into psql as a superuser; in the default installation, dachsroot would be sufficiently privileged. That is:

    dachs upgrade -e | psql gavo   # as dachsroot
    

    If running:

    select top 1 gavo_mocunion(moc('1/3'), moc('2/9'))
    from tap_schema.tables
    

    through your TAP endpoint returns '1/3 2/9', then all is fine. For entertainment, you might also make sure that gavo_mocintersect(moc('1/3'), moc('2/13')) is 2/13 as expected, and that if you intersect with 2/3 you get back an empty string.
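
    If you would rather script these checks than type them into TOPCAT, here is a sketch with pyVO (point the URL at your own TAP endpoint):

    import pyvo

    svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")  # your endpoint here
    q = ("select top 1 gavo_mocintersect(moc('1/3'), moc('2/13')) as m"
        " from tap_schema.tables")
    print(svc.run_sync(q).to_table()["m"][0])  # expect: 2/13
    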

    So – let's bring MOCs to ADQL!

  • Histograms and Hidden Open Clusters

    image: reddish pattern

    Colour-coded histograms for distances of stars in the direction of some NGC open clusters – one cluster per line, so you're looking at a couple of Gigabytes of data here. If you want this a bit more precise: read the article and generate your own image.

    I have spent a bit of time last week polishing up what will (hopefully) be the definitive source of common ADQL User Defined Functions (UDFs) for IVOA review. What's a UDF, you ask? Well, it is an extension to ADQL where service operators can invent new functionality. If you have been following this blog for a while, you will probably remember the ivo_healpix_index function from our dereddening exercise (and some earlier postings): that was a UDF, too.

    This polishing work reminded me of a UDF I've wanted to blog about for quite a while, available in DaCHS (and thus on our Heidelberg Data Center) since mid-2018: gavo_histogram. This, I claim, is a powerful tool for analyses over large amounts of data with rather moderate local means.

    For instance, consider this classic paper on the nature of NGC 2451: What if you were to look for more cases like this, i.e., (indulging in a bit of poetic liberty) open clusters hidden “behind” other open clusters?

    Somewhat more technically, this would mean figuring out whether there are “interesting” patterns in the distance and proper motion histograms towards known open clusters. Now, retrieving the dozens of millions of stars that, say, Gaia has in the direction of open clusters just to build histograms – making each row count for a lot less than one bit – simply is wasteful. This kind of counting and summing is much better done server-side.

    On the other hand, SQL's usual histogram maker, GROUP BY, is a bit unwieldy here, because you have lots of clusters, and you will not see anything if you munge all the histograms together. You could, of course, create a bin index from the distance and then group by this bin and the object name, somewhat like ...ROUND(r_est/20) as bin GROUP by name, bin – but that takes quite a bit of mangling before it can conveniently be used, in particular when you take independent distributions over multiple variables (“naive Bayesian”; but then it's the way to go if you want to capture dependencies between the variables).

    So, gavo_histogram to the rescue. Here's what the server-provided documentation has to say (if you use TOPCAT, you will find this in the “Service” tab in the TAP window's “Use Service” tab):

    gavo_histogram(val REAL, lower REAL, upper REAL, nbins INTEGER) -> INTEGER[]
    
    The aggregate function returns a histogram of val with
    nbins+2 elements. Assuming 0-based arrays, result[0] contains
    the number of underflows (i.e., val<lower), result[nbins+1]
    the number of overflows. Elements 1..nbins are the counts in
    nbins bins of width (upper-lower)/nbins. Clients will have to
    convert back to physical units using some external communication;
    there currently is no (meta-) data as to what lower and upper were in
    the TAP response.
    

    This may sound a bit complicated, but the gist really is: type gavo_histogram(r_est, 0, 2000, 20) as hist, and you will get back an array with 20 bins, roughly 0..100, 100..200, and so on, and two extra bins for under- and overflows.
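
    Since, as the documentation says, the TAP response carries no metadata on what lower and upper were, you have to reconstruct the physical bins yourself. For the gavo_histogram(r_est, 0, 2000, 20) example, that could look like this (a sketch, assuming numpy):

    import numpy

    lower, upper, nbins = 0, 2000, 20
    # physical bin edges: 0, 100, ..., 2000
    edges = numpy.linspace(lower, upper, nbins + 1)
    # bin centres: 50, 150, ..., 1950
    centers = (edges[:-1] + edges[1:])/2
    # hist[0] counts underflows, hist[-1] overflows;
    # hist[1:-1] lines up with centers
    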

    Let's try this for our open cluster example. The obvious starting point is selecting the candidate clusters; we are only interested in famous clusters, so we take them from the NGC (if that's too boring for you: with TAP uploads you could take the clusters from Simbad, too), which conveniently sits in my data center as openngc.data:

    select name, raj2000, dej2000, maj_ax_deg
    from openngc.data
    where obj_type='OCl'
    

    Then, we need to add the stars in their rough directions. That's a classic crossmatch, and of course these days we use Gaia as the star catalogue:

    select name, source_id
    from openngc.data
    join gaia.dr2light
    on (
      1=contains(
        point(ra,dec),
        circle(raj2000, dej2000, maj_ax_deg)))
    where obj_type='OCl'
    

    This is now a table of cluster names and Gaia source ids of the candidate stars. To add distances, you could fiddle around with Gaia parallaxes, but because there is a 1/x involved in deriving distances, the error model is complicated, and it is much easier and safer to adopt Bailer-Jones et al's pre-computed distances and join them in through source_id.

    And that distance estimation, r_est, is exactly what we want to take our histograms over – which means we have to group by name and use gavo_histogram as an aggregate function:

    with ocl as (
      select name, raj2000, dej2000, maj_ax_deg, source_id
      from openngc.data
      join gaia.dr2light
      on (
        1=contains(
          point(ra,dec),
          circle(raj2000, dej2000, maj_ax_deg)))
      where obj_type='OCl')
    
    select
      name,
      gavo_histogram(r_est, 0, 4000, 200) as hist
    from
      gdr2dist.main
      join ocl
      using (source_id)
    where r_est!='NaN'
    group by name
    

    That's it! This query will give you (admittedly somewhat raw, since we're ignoring the confidence intervals) histograms of the distances of stars in the direction of all NGC open clusters. Of course, it will run a while, as many millions of stars are processed, but TAP async mode easily takes care of that.

    Oh, one odd thing is left to discuss (ignore this paragraph if you don't know what I'm talking about): r_est!='NaN'. That's not quite ADQL but happens to do the isnan of normal programming languages at least when the backend is Postgres: It is true if computations failed and there is an actual NaN in the column. This is uncommon in SQL databases, and normal NULLs wouldn't hurt gavo_histogram. In our distance table, some NaNs slipped through, and they would poison our histograms. So, ADQL wizards probably should know that this is what you do for isnan, and that the usual isnan test val!=val doesn't work in SQL (or at least not with Postgres).

    So, fire up your TOPCAT and run this on the TAP server http://dc.g-vo.org/tap.

    You will get a table with 618 (or so) histograms. At this point, TOPCAT can't do a lot with them. So, let's emigrate to pyVO and save this table in a file ocl.vot.
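
    In case you want to script that step, here is roughly how the retrieval could look with pyVO (a sketch; the query is the one from above, and TAPService.run_async takes care of submitting and polling the async job):

    import pyvo

    QUERY = """
    with ocl as (
      select name, raj2000, dej2000, maj_ax_deg, source_id
      from openngc.data
      join gaia.dr2light
      on (
        1=contains(
          point(ra,dec),
          circle(raj2000, dej2000, maj_ax_deg)))
      where obj_type='OCl')
    select
      name,
      gavo_histogram(r_est, 0, 4000, 200) as hist
    from
      gdr2dist.main
      join ocl
      using (source_id)
    where r_est!='NaN'
    group by name
    """

    svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
    result = svc.run_async(QUERY)  # async mode: this runs for a while
    result.to_table().write("ocl.vot", format="votable")
    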

    My visualisation proposition would be: let's subtract a “background” from the histograms (I'm using splines to model that background) and then plot them row by row; multi-peaked rows in the resulting image would be suspicious.

    This is exactly what the programme below does, and the image for this article is a cutout of what the code produces. Set GALLERY = True to see what the histograms and background fits look like (hit 'q' to get to the next one).

    In the resulting image, any two yellow dots in one line are at least suspicious; I've spotted a few, but they are so conspicuous that others must have noticed. Or have they? If you'd like to check a few of them out, feel free to let me know – I think I have a few ideas how to pull some VO tricks to see if these things are real – and if they've been spotted before.

    So, here's the yellow spot programme:

    from astropy.table import Table
    import matplotlib.pyplot as plt
    import numpy
    from scipy.interpolate import UnivariateSpline
    
    GALLERY = False
    
    def substract_background(arr):
        # normalise the histogram to its mean, then fit a smooth
        # spline as "background" and subtract it, so that only
        # local structure (peaks) remains
        x = range(len(arr))
        mean = sum(arr)/len(arr)
        arr = arr/mean
        background = UnivariateSpline(x, arr, s=100)
        cleaned = arr-background(x)
    
        if GALLERY:
            plt.plot(x, arr)
            plt.plot(x, background(x))
            plt.show()
    
        return cleaned
    
    
    def main():
        tab = Table.read("ocl.vot")
        # strip the under-/overflow bins ([0] and [-1]) from each
        # histogram before removing the background
        hist = numpy.array([substract_background(r["hist"][1:-1])
          for r in tab])
        plt.matshow(hist, cmap='gist_heat')
        plt.show()
    
    
    if __name__=="__main__":
        main()
    
