Crazy Shapes in TAP

[Image: OpenNGC shapes]
A complex shape from OpenNGC: MOCs need not be convex, or simply connected, or anything.

So far when you did spherical geometry in ADQL, you had points, circles, and polygons as data types, and you could test for intersection and containment as operations. This feature set is a bit unsatisfying because there are no (algebraic) groups in this picture: When you join or intersect two circles, the result is only a circle if one contains the other. With non-intersecting polygons, you will again not end up with a (simply connected) spherical polygon.

Enter MOCs (which I’ve mentioned a few times before on this blog): these are essentially arbitrary shapes on the sky, in practice represented through lists of pixels, cleverly done so they can be sufficiently precise and rather compact at the same time. While MOCs are powerful and surprisingly simple in practice, ADQL doesn’t know about them so far, which limits quite a bit what you can do with them. Well, DaCHS has been able to serve them since about version 1.3 if you managed to push them into the database, but there were no operations you could do on them.

Thanks to work done by credativ (who were really nice to work with), funded with some money we had left from our previous e-inf-astro project (BMBF FKZ 05A17VH2) on the pgsphere database extension, this has now changed. At least on the GAVO data center, MOCs are now essentially first-class citizens that you can create, join, and intersect within ADQL, and you can retrieve the results. All operators of DaCHS services are just a few updates away from being able to offer the same.

So, what can you do? To follow what’s below, get a sufficiently new TOPCAT (4.7 will do) and open its TAP client on http://dc.g-vo.org/tap (a.k.a. GAVO DC TAP).

Basic MOC Operations in TAP

First, let’s make sure you can plot MOCs; run

SELECT name, deepest_shape
FROM openngc.shapes

Then do Graphics/Sky Plot, and in the window that pops up, Layers/Add Area Control. Then select your new table in the Position tab, and finally choose deepest_shape as area (yeah, this could become a bit more automatic and probably will over time). You will then see the footprints of a few NGC objects (OpenNGC’s author Mattia Verga hasn’t done all yet; he certainly welcomes help on OpenNGC’s version control repo), and you can move around in the plot, yielding perhaps something like Fig. 1.

Now let’s color these shapes by object class. If you look, openngc.data has an obj_type column – let’s group on it:

SELECT
  obj_type,
  shape,
  AREA(shape) AS ar
FROM (
  SELECT obj_type, SUM(deepest_shape) AS shape
  FROM openngc.shapes
  NATURAL JOIN openngc.data
  GROUP BY obj_type) AS q

(the extra subquery is a workaround necessary because the area function wants a geometry or a column reference, and ADQL doesn’t allow aggregate functions – like sum – as either of these).

[Image: Coloured shapes]
Fig. 2: OpenNGC shapes grouped and coloured by type.

In the result you will see that so far, contours for about 40 square degrees of star clusters with nebulae have been put in, but only 0.003 square degrees of stellar associations. And you can now plot by the areas covered by the various sorts of objects; in Fig. 2, I’ve used Subsets/Classify by Column in TOPCAT’s Row Subsets to have colours indicate the different object types – a great workaround when one deals with categorical variables in TOPCAT.

MOCs and JOINs

Another table that already has MOCs in it is rr.stc_spatial, which has the coverage of VO resources (and is the deeper reason I’ve been pushing improved MOC support in pgsphere – background); this isn’t available for all resources yet, but at least about 16000 are in already. For instance, here’s how to get the coverage of resources talking about planetary nebulae:

SELECT ivoid, res_title, coverage
FROM rr.subject_uat
  NATURAL JOIN rr.stc_spatial
  NATURAL JOIN rr.resource
WHERE uat_concept='planetary-nebulae'
  AND AREA(coverage)<20

(the rr.subject_uat table is a local extension to RegTAP that will be the subject of some future blog post; you could also use rr.res_subject, but because people still use wildly different keyword schemes – if any –, that wouldn’t be as much fun). When plotted, that’s the left side of Fig. 3. If you do that yourself, you will notice that the resolution here is about one degree, which is a special property of the sort of MOCs I am proposing for the Registry: They are of order 6. Resolution in MOC goes up with order, doubling with every step. Thus MOCs of order 7 have a resolution of about half a degree, MOCs of order 5 a resolution of about two degrees.
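
If you want to convince yourself of these numbers: a MOC of order k is made up of HEALPix cells of that order, of which there are 12·4^k on the whole sphere, so a reasonable scale estimate is the square root of one cell’s solid angle. Here is a quick sketch of that in plain Python (nothing GAVO-specific about it):

import math

def moc_cell_scale_deg(order):
    """Return the approximate angular size (in degrees) of a HEALPix cell
    at the given order, taken as the square root of its solid angle."""
    n_cells = 12*4**order
    return math.degrees(math.sqrt(4*math.pi/n_cells))

for order in (5, 6, 7):
    print(order, round(moc_cell_scale_deg(order), 2))
# prints roughly: 5 1.83, 6 0.92, 7 0.46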

One possible next step is to fetch the intersection of each of these coverages with, say, the DFBS (cf. the post on Byurakan spectra). That would look like this:

SELECT 
  ivoid, 
  res_title,
  gavo_mocintersect(coverage, dfbscoverage) as ovrlp
FROM (
  SELECT ivoid, res_title, coverage
  FROM rr.subject_uat
  NATURAL JOIN rr.stc_spatial
  NATURAL JOIN rr.resource
  WHERE uat_concept='planetary-nebulae'
  AND AREA(coverage)<20) AS others
CROSS JOIN (
  SELECT coverage AS dfbscoverage 
  FROM rr.stc_spatial
  WHERE ivoid='ivo://org.gavo.dc/dfbsspec/q/spectra') AS dfbs

(the DFBS’ identifier I got with a quick query on WIRR). This uses the gavo_mocintersect user defined function (UDF), which takes two MOCs and returns a MOC of their common pixels. Which is another important reason why MOCs are so cool: together with union and intersection, they form groups. It should not come as a surprise that there is also a gavo_mocunion UDF. The sum aggregate function we’ve used in our grouping above is (conceptually) built on that.

[Image: Planetary nebula footprint and plate matches]
Fig. 3: Left: The common footprint of VO resources declaring a subject of planetary-nebula (and declaring a footprint). Right bottom: Heidelberg plates intersecting this, and, in blue, level-6 intersections. Above this, an enlarged detail from this plot.

You can also convert polygons and circles to MOCs using the (still DaCHS-only) MOC constructor. For instance, you could compute the coverage of all resources dealing with planetary nebulae, filtering against obviously over-eager ones by limiting the total area, and then match that against the coverages of images in, say, the Königstuhl plate archives HDAP. Watch this:

SELECT 
  im.*,
  gavo_mocintersect(MOC(6, im.coverage), pn_coverage) as ovrlp
FROM (
  SELECT SUM(coverage) AS pn_coverage
  FROM rr.subject_uat
  NATURAL JOIN rr.stc_spatial
  WHERE uat_concept='planetary-nebulae'
  AND AREA(coverage)<20) AS c
JOIN lsw.plates AS im
ON 1=INTERSECTS(pn_coverage, MOC(6, coverage))

– so, the MOC(order, geo) function should give you a MOC for other geometries. There are limits to this right now because of limitations of the underlying MOC library; in particular, non-convex polygons are not supported right now, and there are precision issues. We hope this will be rectified soon-ish when we base pgsphere’s MOC operations on the CDS HEALPix library. Anyway, the result of this is plotted on the right of Fig. 3.

Open Ends

In case you have MOCs from the outside, you can also construct MOCs from literals; the literals happen to be the ASCII MOCs from the standard. That could look like this:

SELECT TOP 1 
  MOC('4/30-33 38 52 7/324-934') AS ar 
FROM tap_schema.tables

For now, you cannot combine MOCs in CONTAINS and INTERSECTS expressions directly; this is mainly because in such an operation, the machine has to decide on the order of the MOC the other geometries are converted to (and computing the predicates between geometry and MOC directly is really painful). This means that if you have a local table with MOCs in a column cmoc that you want to compare against a polygon-valued column coverage in a remote table like this:

SELECT db.* FROM 
  lsw.plates AS db
  JOIN tap_upload.t6
ON 1=CONTAINS(coverage, cmoc) -- fails!

you will receive a rather scary message of the type “operator does not exist: spoly <@ smoc”. To fix it (until we’ve worked out how to reasonably let the computer do that), explicitly convert the polygon:

SELECT db.* FROM 
  lsw.plates AS db
  JOIN tap_upload.t6
ON 1=CONTAINS(MOC(7, coverage), cmoc)

(be stingy when choosing the order here – MOCs that already exist are fast, but making them at high order is expensive).
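
In pyVO, the same upload pattern might look roughly like the sketch below. Treat it as a sketch rather than a recipe: mymocs.vot stands in for a VOTable you saved earlier (for instance, from one of the MOC-returning queries above) that has its MOC column called cmoc, and whether such a column survives the round trip through astropy will depend on the versions involved.

import pyvo
from astropy.table import Table

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
# mymocs.vot: a local VOTable with a MOC-valued column called cmoc
mymocs = Table.read("mymocs.vot")
matches = svc.run_sync("""
    SELECT db.* FROM
      lsw.plates AS db
      JOIN tap_upload.t6
    ON 1=CONTAINS(MOC(7, coverage), cmoc)""",
    uploads={"t6": mymocs})
print(len(matches))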

Having said all that: what I’ve written here is bleeding-edge, and it is not standardised yet. I’d wager, though, that we will see MOCs in ADQL relatively soon, and that what we will see will not be too far from this experiment. Well: Some rough edges, I’d hope, will still be smoothed out.

Getting This on Your Own DaCHS Installation

If you are running a DaCHS installation, you can contribute to takeup (and if not, you can stop reading here). To do that, you need to upgrade to DaCHS’s latest beta (anything newer than 2.1.4 will do) to have the ADQL extension, and, even more importantly, you need to install the postgresql-pgsphere package from our release repository (that’s version 1.1.4 or newer; in a few weeks, getting it from Debian testing would work as well).

You will probably not get that automatically, because if you followed our normal installation instructions, you will have a package called postgresql-11-pgsphere installed (apologies for this chaos; as usual, every single step made sense). The upshot is that with our release repo added, sudo apt install postgresql-pgsphere should give you the new code.

That’s not quite enough, though, because you also need to acquaint the database with the new functions. This can only be done with database administrator privileges, which DaCHS by design does not possess. What DaCHS can do is figure out the commands to do that when it is called as dachs upgrade -e. Have a look at the output, and if you are satisfied it is about what you’d expect, just pipe it into psql as a superuser; in the default installation, dachsroot would be sufficiently privileged. That is:

dachs upgrade -e | psql gavo   # as dachsroot

If running

select top 1 gavo_mocunion(moc('1/3'), moc('2/9')) 
from tap_schema.tables

through your TAP endpoint returns ‘1/3 2/9’, then all is fine. For entertainment, you might also make sure that gavo_mocintersect(moc('1/3'), moc('2/13')) is 2/13 as expected, and that if you intersect with 2/3 you get back an empty string.
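
If you would rather script these checks than type them into TOPCAT, a small pyVO snippet like the following should do (adapt the endpoint URL to your installation; the decode dance is there because, depending on your astropy version, VOTable chars may come back as bytes):

import pyvo

svc = pyvo.dal.TAPService("http://your.server/tap")  # your DaCHS TAP endpoint

def scalar(query):
    """Return the single value of a one-row, one-column TAP result as a string."""
    val = svc.run_sync(query).to_table()[0][0]
    return val.decode("ascii") if isinstance(val, bytes) else val

assert scalar("select top 1 gavo_mocunion(moc('1/3'), moc('2/9'))"
    " from tap_schema.tables") == "1/3 2/9"
assert scalar("select top 1 gavo_mocintersect(moc('1/3'), moc('2/13'))"
    " from tap_schema.tables") == "2/13"
assert scalar("select top 1 gavo_mocintersect(moc('1/3'), moc('2/3'))"
    " from tap_schema.tables") == ""
print("MOC UDFs look fine")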

So – let’s bring MOCs to ADQL!

Histograms and Hidden Open Clusters

[image: reddish pattern]
Colour-coded histograms for distances of stars in the direction of some NGC open clusters — one cluster per line, so you’re looking at a couple of Gigabytes of data here. If you want this a bit more precise: Read the article and generate your own image.

I have spent a bit of time last week polishing up what will (hopefully) be the definitive source of common ADQL User Defined Functions (UDFs) for IVOA review. What’s a UDF, you ask? Well, it is an extension to ADQL where service operators can invent new functionality. If you have been following this blog for a while, you will probably remember the ivo_healpix_index function from our dereddening exercise (and some earlier postings): That was a UDF, too.

This polishing work reminded me of a UDF I’ve wanted to blog about for quite a while, available in DaCHS (and thus on our Heidelberg Data Center) since mid-2018: gavo_histogram. This, I claim, is a powerful tool for analyses over large amounts of data with rather moderate local means.

For instance, consider this classic paper on the nature of NGC 2451: What if you were to look for more cases like this, i.e., (indulging in a bit of poetic liberty) open clusters hidden “behind” other open clusters?

Somewhat more technically this would mean figuring out whether there are “interesting” patterns in the distance and proper motion histograms towards known open clusters. Now, retrieving the dozens of millions of stars that, say, Gaia has in the direction of open clusters to just build histograms – making each row count for a lot less than one bit – simply is wasteful. This kind of counting and summing is much better done server-side.

On the other hand, SQL’s usual histogram maker, GROUP BY, is a bit unwieldy here, because you have lots of clusters, and you will not see anything if you munge all the histograms together. You could, of course, create a bin index from the distance and then group by this bin and the object name, somewhat like ...ROUND(r_est/20) as bin GROUP by name, bin – but that takes quite a bit of mangling before it can conveniently be used, in particular when you take independent distributions over multiple variables (“naive Bayesian”; on the other hand, grouping over several bin columns at once is the way to go if you want to capture dependencies between the variables).

So, gavo_histogram to the rescue. Here’s what the server-provided documentation has to say (if you use TOPCAT, you will find this in the “Service” tab within the TAP window’s “Use Service” tab):

gavo_histogram(val REAL, lower REAL, upper REAL, nbins INTEGER) -> INTEGER[]

The aggregate function returns a histogram of val with nbins+2 elements. Assuming 0-based arrays, result[0] contains the number of underflows (i.e., val<lower), result[nbins+1] the number of overflows. Elements 1..nbins are the counts in nbins bins of width (upper-lower)/nbins. Clients will have to convert back to physical units using some external communication; there currently is no (meta-)data in the TAP response as to what lower and upper were.

This may sound a bit complicated, but the gist really is: type gavo_histogram(r_est, 0, 2000, 20) as hist, and you will get back an array with 20 bins, roughly 0..100, 100..200, and so on, and two extra bins for under- and overflows.
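
Since, as the documentation says, the response does not tell you what lower and upper were, you have to keep track of them yourself when converting back to physical units. A little helper for that might look like this (a sketch of mine, not part of any library):

import numpy

def split_histogram(hist, lower, upper):
    """Split a gavo_histogram array into (bin_edges, counts, n_under, n_over).

    hist has nbins+2 entries; the first and last are the under- and overflows."""
    hist = numpy.asarray(hist)
    n_under, counts, n_over = hist[0], hist[1:-1], hist[-1]
    edges = numpy.linspace(lower, upper, len(counts)+1)
    return edges, counts, n_under, n_over

# for gavo_histogram(r_est, 0, 2000, 20), you would call
# split_histogram(row["hist"], 0, 2000)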

Let’s try this for our open cluster example. The obvious starting point is selecting the candidate clusters; we are only interested in famous clusters, so we take them from the NGC (if that’s too boring for you: with TAP uploads you could take the clusters from Simbad, too), which conveniently sits in my data center as openngc.data:

select name, raj2000, dej2000, maj_ax_deg
from openngc.data
where obj_type='OCl'

Then, we need to add the stars in their rough directions. That’s a classic crossmatch, and of course these days we use Gaia as the star catalogue:

  select name, source_id
  from openngc.data 
  join gaia.dr2light
  on (
    1=contains(
      point(ra,dec),
      circle(raj2000, dej2000, maj_ax_deg)))
  where obj_type='OCl'

This is now a table of cluster names and Gaia source ids of the candidate stars. To add distances, you could fiddle around with Gaia parallaxes, but because there is a 1/x involved in deriving distances, the error model is complicated, and it is much easier and safer to adopt Bailer-Jones et al’s pre-computed distances and join them in through source_id.

And that distance estimation, r_est, is exactly what we want to take our histograms over – which means we have to group by name and use gavo_histogram as an aggregate function:

with ocl as (
  select name, raj2000, dej2000, maj_ax_deg, source_id
  from openngc.data 
  join gaia.dr2light
  on (
    1=contains(
      point(ra,dec),
      circle(raj2000, dej2000, maj_ax_deg)))
  where obj_type='OCl')

select
  name,
  gavo_histogram(r_est, 0, 4000, 200) as hist
from
  gdr2dist.main
  join ocl
  using (source_id)
where r_est!='NaN'
group by name

That’s it! This query will give you (admittedly somewhat raw, since we’re ignoring the confidence intervals) histograms of the distances of stars in the direction of all NGC open clusters. Of course, it will run a while, as many millions of stars are processed, but TAP async mode easily takes care of that.

Oh, one odd thing is left to discuss (ignore this paragraph if you don’t know what I’m talking about): r_est!='NaN'. That’s not quite ADQL, but the comparison against 'NaN' happens to do what the isnan of normal programming languages does, at least when the backend is Postgres: r_est='NaN' is true if computations failed and there is an actual NaN in the column, and the != form above throws such rows out. This is uncommon in SQL databases, and normal NULLs wouldn’t hurt gavo_histogram. In our distance table, however, some NaNs slipped through, and they would poison our histograms. So, ADQL wizards probably should know that this is what you do for isnan, and that the usual isnan test val!=val doesn’t work in SQL (or at least not with Postgres).

So, fire up your TOPCAT and run this on the TAP server http://dc.g-vo.org/tap.

You will get a table with 618 (or so) histograms. At this point, TOPCAT can’t do a lot with them. So, let’s emigrate to pyVO and save this table in a file ocl.vot.
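
A minimal pyVO way to do that (in async mode because of the runtime; the query string is just the one from above) could be:

import pyvo

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
job = svc.submit_job("""
    with ocl as (
      select name, raj2000, dej2000, maj_ax_deg, source_id
      from openngc.data
      join gaia.dr2light
      on (
        1=contains(
          point(ra,dec),
          circle(raj2000, dej2000, maj_ax_deg)))
      where obj_type='OCl')
    select
      name,
      gavo_histogram(r_est, 0, 4000, 200) as hist
    from
      gdr2dist.main
      join ocl
      using (source_id)
    where r_est!='NaN'
    group by name""")
job.run()
job.wait()   # this takes a while; many millions of rows are being scanned
job.fetch_result().to_table().write("ocl.vot", format="votable")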

My visualisation proposition would be: Let’s subtract a “background” from the histograms (I’m using splines to model that background) and then plot them row by row; multi-peaked rows in the resulting image would be suspicious.

This is exactly what the programme below does, and the image for this article is a cutout of what the code produces. Set GALLERY = True to see what the histograms and background fits look like (hit ‘q’ to get to the next one).

In the resulting image, any two yellow dots in one line are at least suspicious; I’ve spotted a few, but they are so conspicuous that others must have noticed. Or have they? If you’d like to check a few of them out, feel free to let me know – I think I have a few ideas how to pull some VO tricks to see if these things are real – and if they’ve been spotted before.

So, here’s the yellow spot programme:

from astropy.table import Table
import matplotlib.pyplot as plt
import numpy
from scipy.interpolate import UnivariateSpline

GALLERY = False

def substract_background(arr):
    # normalise the row and remove a smooth spline fit so that only
    # deviations from the overall trend ("the background") remain
    x = range(len(arr))
    mean = sum(arr)/len(arr)
    arr = arr/mean
    background = UnivariateSpline(x, arr, s=100)
    cleaned = arr-background(x)

    if GALLERY:
        plt.plot(x, arr)
        plt.plot(x, background(x))
        plt.show()

    return cleaned


def main():
    tab = Table.read("ocl.vot")
    # [1:-1]: drop gavo_histogram's under- and overflow bins
    hist = numpy.array([substract_background(r["hist"][1:-1])
      for r in tab])
    plt.matshow(hist, cmap='gist_heat')
    plt.show()
    

if __name__=="__main__":
    main()

The Bochum Galactic Disk Survey

[Image: Patches of higher perceived variability on the Sky]
Fig 1: How our haphazard variability ratio varies over the sky (galactic coordinates). And yes, it’s clear that this isn’t dominated by physical variability.

About a year ago, I reported on a workshop on “Large Surveys with Small Telescopes” in Bamberg; at around the same time, I’ve published an example for those, the Bochum Galactic Disk Survey BGDS, which used a twin 15 cm robotic telescope in some no longer forsaken place in the Andes mountains to monitor the brighter stars in the southern Milky Way. While some tables from an early phase of the survey have been on VizieR for a while, we now publish the source images (also in SIAP and Obscore), the mean photometry (via SCS and TAP) and, perhaps most fun of all, the lightcurves (via SSAP and TAP) – a whopping 35 million of the latter.

This means that in tools like Aladin, you can now find such light curves (and images in two bands from a lot of epochs) when you are in the survey’s coverage, and you can run TAP queries on GAVO’s http://dc.g-vo.org/tap server against the full photometry table and the time series.

Regular readers of this blog will not be surprised to see me use this as an excuse to show off a bit of ADQL trickery.

If you have a look at the bgds.phot_all table in your favourite TAP client, you’ll see that it has a column amp, giving the difference between the highest and lowest magnitude. The trouble is that amp for almost all objects just reflects the measurement error rather than any intrinsic variability. To get an idea what’s “normal” (based on the fact that essentially all stars have essentially constant luminosity on the range and resolution scales considered here), run a query like

SELECT ROUND(amp/err_mag*10)/10 AS bin, COUNT(*) AS n
FROM bgds.phot_all
WHERE nobs>10
GROUP BY bin

As this scans the entire 75 million rows of the table, you will probably have to use async mode to run this.

[image: distribution of amplitude/mag error]
Figure 2: The distribution of amplitude over magnitude error for all BGDS objects with nobs>10 (blue) and the subset with a mean magnitude brighter than 15.

When it comes back, you will have, for objects where any sort of statistics make sense at all (hence nobs>10), a histogram (of sorts) of the amplitude in units of upstream’s magnitude error estimation. If you log-log-plot this, you’ll see something like Figure 2. The curve at least tells you that the magnitude error estimate is not very far off – the peak at about 3 “sigma” is not unreasonable since about half of the objects have nobs of the order of a hundred and thus would likely contain outliers that far out assuming roughly Gaussian errors.
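
If, for the log-log plot, you have saved the bin/n result of the query above to a file – say, hist.vot (that name is my choice, of course) – a few lines of python are all it takes:

from astropy.table import Table
import matplotlib.pyplot as plt

res = Table.read("hist.vot")   # the bin/n result of the GROUP BY query above
plt.loglog(res["bin"], res["n"], ".")
plt.xlabel("amp/err_mag")
plt.ylabel("N")
plt.show()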

And if you’re doing a rough cutoff at amp/magerr>10, you will get perhaps not necessarily true variables, but, at least potentially interesting objects.

Let’s use this insight to see if we spot any pattern in the distribution of these interesting objects. We’ll use the HEALPix technique I’ve discussed three years ago in this blog, but with a little twist from ADQL 2.1: The Common Table Expressions or CTEs I have already mentioned in my blog post on ADQL 2.1 and then advertised in the piece on the Henry Draper catalogue. The brief idea, again, is that you can write queries and give them a name that you can use elsewhere in the query as if it were an actual table. It’s not much different from normal subqueries, but you can re-use CTEs in multiple places in the query (hence the “common”), and it’s usually more readable.

Here, we first create a version of the photometry table that contains HEALPixes and our variability measure, use that to compute two unsophisticated per-HEALPix statistics and eventually join these two to our observable, the ratio of suspected variables to all stars observed (the multiplication with 1.0 is a cheap way to make a float out of a value, which is necessary here because a/b does integer division in ADQL if a and b are both integers):

WITH photpoints AS (
  SELECT 
    amp/err_mag AS redamp,
    amp,
    ivo_healpix_index(5, ra, dec) AS hpx
  FROM bgds.phot_all
  WHERE 
    nobs>10
    AND band_name='SDSS i'
    AND mean_mag<16),
all_objs AS (
  SELECT count(*) AS ct,
    hpx
    FROM photpoints GROUP BY hpx),
strong_var AS (
  SELECT COUNT(*) AS ct,
    hpx
    FROM photpoints
    WHERE redamp>4 AND amp>1 GROUP BY hpx)
SELECT
  strong_var.ct/(1.0*all_objs.ct) AS obs,
  all_objs.ct AS n,
  hpx
FROM strong_var JOIN all_objs USING (hpx)
WHERE all_objs.ct>20

If you plot this using TOPCAT’s HEALPix thingy and ask it to use Galactic coordinates, you’ll end up with something like Figure 1.

There clearly is some structure, but given that the variables ratio reaches up to 0.2, this is still reflecting instrumental or pipeline effects and thus earthly causes rather than astrophysics. And that’s going beyond what I’d like to talk about on a VO blog, although I’ll take any bet that you will see significant structure in the spatial distribution of the variability ratio at about any magnitude cutoff, since there are a lot of different population mixtures in the survey’s footprint.

Be that as it may, let’s have a quick look at the time series. As with the short spectra from the Byurakan use case, we’ve stored the actual time series as arrays in the database (the mjd and mags columns in bgds.ssa_time_series). Unfortunately, since they are a lot less array-like than homogeneous spectra, it’s also a lot harder to do interesting things with them without downloading them (I’m grateful for ideas for ADQL functions that will let you do in-DB analysis for such things). Still, you can at least easily download them in bulk and then process them in, say, python to your heart’s content. The Byurakan use case should give you a head start there.
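
For instance, a quick and dirty way to pull a few of these array-valued light curves and look at them could be the following (the column names mjd and mags are the ones mentioned above; the rest of the snippet is just my improvisation):

import matplotlib.pyplot as plt
import pyvo

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
lcs = svc.run_sync(
    "SELECT TOP 3 mjd, mags FROM bgds.ssa_time_series").to_table()
for row in lcs:
    plt.plot(row["mjd"], row["mags"], ".")
plt.xlabel("MJD")
plt.ylabel("mag")
plt.gca().invert_yaxis()   # magnitudes: brighter is up
plt.show()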

For a quick demo, I couldn’t resist checking out objects that Simbad classifies as possible long-period variables (you see, as I write this, the public fuss over Betelgeuse’s brief waning is just dying down), and so I queried Simbad for:

SELECT ra, dec, main_id
FROM basic
WHERE
  otype='LP?'
  AND 1=CONTAINS(
     POINT('', ra, dec),
     POLYGON('', 127, -30, 112, -30, 272, -30, 258, -30))

(as of this writing, Simbad still needs the ADQL 2.0-compliant first arguments to POINT and POLYGON), where the POLYGON is intended to give the survey’s footprint. I obtained that by reading off the coordinates of the corners in my Figure 1 while it was still in TOPCAT. Oh, and I had to shrink it a bit because Simbad (well, the underlying Postgres server, and, more precisely, its pg_sphere extension) doesn’t want polygons with edges longer than π. This will soon become less pedestrian: MOCs in relational databases are coming; more on this soon.

[TOPCAT action shot with a light curve display]
Fig 3: V566 Pup’s BGDS lightcurve in a TOPCAT configured to auto-plot the light curves associated with a row from the bgds.ssa_time_series table on the GAVO DC TAP service.

If you now do the usual spiel with an upload crossmatch to the bgds.ssa_time_series table and check “Plot Table” in Views/Activation Action, you can quickly page through the light curves (TOPCAT will keep the plot style as you go from dataset to dataset, so it’s worth configuring the lines and the error bars). Which could bring you to something like Fig. 3; and that would suggest that V* V566 Pup isn’t really long-period unless the errors are grossly off.

Parallel Queries

[Image: Plot of run times]
An experiment with parallel querying of PPMX, going from single-threaded execution to using seven workers.

Let me start this post with a TL;DR for

  • scientists: Large analysis queries (like those that contain a GROUP BY clause) profit a lot from parallel execution, and you needn’t do a thing for that.
  • DaCHS operators: When you have large tables, Postgres 11 together with the next DaCHS release may speed up your responses quite dramatically in some cases.

So, here’s the story –

I’ve finally overcome my stretch trauma and upgraded the Heidelberg data center’s database server to Debian buster. With that, I got Postgres 11, and I finally bothered to look into what it takes to enable parallel execution of database queries.

Turns out: My Postgres started to do parallel execution right away, but just in case, I went for the following lines in postgresql.conf:

max_parallel_workers_per_gather = 4
max_worker_processes = 10
max_parallel_workers = 10

Don’t quote me on this – I frankly admit I haven’t really developed a feeling for the consequences of max_parallel_workers_per_gather and instead just did some experiments while the box was loaded otherwise, determining where raising that number has a diminishing return (see below for more on this).

The max_worker_processes thing, on the other hand, is an educated guess: on my data center, there’s essentially never more than one person at a time who’s running “interesting”, long-running queries (i.e., async), and that person should get the majority of the execution units (the box has 8 physical CPUs that look like 16 cores due to hyperthreading) because all other operations are just peanuts in comparison. I’ll gladly accept advice to the effect that that guess isn’t that educated after all.

Of course, that wasn’t nearly enough. You see, since TAP queries can return rather large result sets – on the GAVO data center, the match limit is 16 million rows, which for a moderate row size of 2 kB already translates to 32 GB of memory use if pulled in at once, half the physical memory of that box –, DaCHS uses cursors (if you’re a psycopg2 person: named cursors) to stream results and write them out to disk as they come in.

Sadly, postgres won’t do parallel plans if it thinks people will discard a large part of the result anyway, and it thinks that if you’re coming through a cursor. So, in SVN revision 7370 of DaCHS (and I’m not sure if I’ll release that in this form), I’m introducing a horrible hack that, right now, just checks if there’s a literal “group” in the query and doesn’t use a cursor if so. The logic is, roughly: With GROUP, the result set probably isn’t all that large, so streaming isn’t that important. At the same time, this type of query is probably going to profit from parallel execution much more than your boring sequential scan.

This gives rather impressive speed gains. Consider this example (of course, it’s selected to be extreme):

import contextlib
import pyvo
import time

@contextlib.contextmanager
def timeit(activity):
  start_time = time.time()
  yield
  end_time = time.time()
  print("Time spent on {}: {} s".format(activity, end_time-start_time))


svc = pyvo.tap.TAPService("http://dc.g-vo.org/tap")
with timeit("Cold (?) run"):
  svc.run_sync("select round(Rmag) as bin, count(*) as n"
    " from ppmx.data group by bin")
with timeit("Warm run"):
  svc.run_sync("select round(Rmag) as bin, count(*) as n"
    " from ppmx.data group by bin")

(if you run it yourself and you get warnings about VOTable versions from astropy, ignore them; I’m right and astropy is wrong).

Before enabling parallel execution, this was 14.5 seconds on a warm run, after, it was 2.5 seconds. That’s almost a 6-fold speedup. Nice!

Indeed, that holds beyond toy examples. The showcase Gaia density plot,

SELECT
        count(*) AS obs,
        source_id/140737488355328 AS hpx
FROM gaia.dr2light
GROUP BY hpx

(the long number is 2^35·4^(12−6), which turns source_ids into level-6 HEALPixes as per the source_id construction explained in the Gaia documentation; please note that Postgres right now isn’t smart enough to parallelise ivo_healpix), which traditionally ran for about an hour, is now done in less than 10 minutes.
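
If you want to do the same conversion client-side, it is just an integer division; the 2^35 comes from how Gaia packs its source_ids, and the 4^(12−k) takes you from the native level 12 down to level k:

def source_id_to_healpix(source_id, level=6):
    """Return the nested HEALPix index at the given level that is
    encoded in a Gaia DR2 source_id."""
    return source_id//(2**35*4**(12-level))

print(2**35*4**(12-6))   # 140737488355328, the divisor used in the query above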

In case you’d like to try things out on your postgres, here’s what I’ve done to establish the max_parallel_workers_per_gather value above.

  1. Find a table with a few 1e7 rows. Think of a query that will return a small result set, so that the timing is not confused by the cost of shipping a large result. In my case, that’s a magnitude histogram, and the query would be
    select round(Rmag) as bin, count(*) 
    as n from ppmx.data 
    group by bin;
    

    Run this query once so the data is in the disk cache (the query is “warm”).

  2. Establish a non-parallel baseline. That’s easy to do:
    set max_parallel_workers_per_gather=0;
    
  3. Then run
    explain analyze select round(Rmag) as bin, count(*) as n from ppmx.data group by bin;
    

    You should see a simple query plan with the runtime for the non-parallel execution – in my case, a bit more than 12 seconds.

  4. Then raise the number of max_parallel_workers_per_gather successively. Make sure the query plan has lines of the form “Workers Planned” or so. You should see that the execution time falls with the number of workers you give it, up to the value of max_worker_processes – or until postgres decides your table is too small to warrant further parallelisation, which for my settings happened at 7. A scriptable version of this loop follows after the list.
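
If you would like to script that little experiment rather than type into psql, a psycopg2 sketch like the following should do; it assumes a local connection to the gavo database with a role that may read ppmx.data:

import psycopg2

QUERY = ("explain analyze select round(Rmag) as bin, count(*) as n"
    " from ppmx.data group by bin")

with psycopg2.connect("dbname=gavo") as conn:
    with conn.cursor() as cur:
        for n_workers in range(8):
            # a per-session setting, so no superuser rights needed for this
            cur.execute("set max_parallel_workers_per_gather=%d"%n_workers)
            cur.execute(QUERY)
            plan = [row[0] for row in cur.fetchall()]
            # the total runtime is in the plan's last line
            print(n_workers, plan[-1])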

Note, though, that in realistic, more complex queries, there will probably be multiple operations that will profit from parallelisation in a single query. So, if in this trivial example you can go to 15 gatherers and still see an improvement, this could actually make things slower for complex queries. But as I said above: I have no instinct yet for how things will actually work out. If you have experiences to share: I’m sure I’m not the only person on dachs-users who’d be interested.

LAMOST5 meets Datalink

One of the busiest spectral survey instruments operated right now is the Large Sky Area Multi-Object Fiber Spectrograph Telescope (LAMOST). And its data is in the VO, more or less: DR2 and DR3 have been brought into the VO by our Czech colleagues, but since they currently lack resources to update their services to the latest releases, they have kindly given me their DaCHS resource descriptor, and so I had a head start for publishing DR5 in Heidelberg.

With some minor updates, here it is now: Over nine million medium-resolution spectra covering large parts of the northern sky – the spatial coverage is like this:

[Coverage Healpix map]

There’s lots of fun to be had with this; of course, there’s an SSA service, so when you point Aladin or Splat at some part of the covered sky and look for spectra, chances are you’ll see LAMOST spectra, and when working on some of our tutorials (this one, for example), it happened that LAMOST actually had what I was looking for when writing them.

But I’d like to use the opportunity to mention two other modes of accessing the data.

Tablesample and TOPCAT’s Plot Table activation action

Say you’d like to look at spectra of M stars and would like to have some sample from across the sky. Fire up TOPCAT, point its TAP client at the GAVO DC TAP service (http://dc.g-vo.org/tap) and run something like

select 
  ssa_pubDID, accref, raj2000, dej2000, ssa_targsubclass
from lamost5.data tablesample(1)
where 
  ssa_targsubclass like 'M%' 

[Image: stacked spectra]

This is using the TABLESAMPLE modifier in the from clause, which isn’t standard ADQL yet. As mentioned in the DaCHS 1.4 announcement, DaCHS has a prototype implementation of what’s been discussed on the IVOA’s DAL mailing list: pick a part of a table rather than the full one. It takes a percentage as an argument, and tells the server to choose about this percentage of the table’s records using a reasonable and fast heuristic. Note that this won’t give you perfect statistical sampling, but if it’s not “good enough” for some purpose, I’d like to learn about that purpose.

Drawing a proper statistical sample, on the other hand, would take minutes on the GAVO database server – with tablesample, I had the roughly 6000 spectra the above query returns essentially instantaneously, and from eyeballing a sky plot of them, I’d say their distribution is close enough to that of the full DR5. So: tablesample is your friend.

For a quick look at the spectra themselves, in TOPCAT click Views/Activation Actions, check “Plot Table” and make sure TOPCAT proposes the accref column as “Table Location” (if you don’t see these items, update your TOPCAT – it’s worth it). Now click on a row or perhaps a dot on a plot and behold an M spectrum.

Cutouts via Datalink

LAMOST releases spectra in FITS format pretty much like the ones you may know from SDSS. The trick above works because we instead hand out proper, IVOA Spectral Data Model-compliant spectra through SSA and TAP. However, if you need to go back to the original files, you can, using Datalink. If you’re unsure what this Datalink thing is: call me vain, but I still like my 2015 ADASS poster explaining that. In TOPCAT, you’d be using the “Invoke Service” activation action to get to the datalinks.

If you have actual work to do, offloading repetitive work to the computer is what you want, and fortunately, pyVO knows about datalink, too. I grant you that this is hard to discover so far, and the interface is… a tiny bit clunky. Until some kind soul cleans up the pyVO datalink act, a poster Stefan and I showed at the 2017 ADASS might give you an idea which buttons to press. Or read on and see how things work for LAMOST5.

The shortest way to datalinks is a TAP query that at least retrieves the ssa_pubdid column (that’s a must; Datalink can’t work without it) and, on the result, run the iter_datalinks method. This yields, for each row, an object in which you can find the associated data items (in this case, a preview and the original FITS with the #progenitor semantics), plus the cutout service.

Hence, a minimal example for pulling the legacy FITS links out of the first three items in lamost5.data would look like this:

import pyvo

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
for dl in svc.run_sync("select top 3 ssa_pubdid"
        " from lamost5.data").iter_datalinks():
    print(next(dl.bysemantics("#progenitor")
        )["access_url"].decode("ascii"))

This is a bit different from listing 2 in the poster linked above because it’s python3, so getting the first element from an iterator looks a bit different, and (curse astropy.votable for returning VOTable chars as bytes rather than strings!) you’ll want to turn the URL into a proper string manually.

Another, actually more interesting, thing you can do with Datalink is cut out regions of interest. The LAMOST spectra are fairly long (though of course still small by image standards), so if you’re only interested in a single line, you can save a bit of storage and bandwidth over blindly pulling the whole thing.

For instance, if you wanted to pull the vicinity of the H and K Fraunhofer lines from the matches in the loop in the snippet above, you could say:

from astropy import units as u
proc = next(dl.iter_procs())
cutout = proc.processed(band=(392*u.nm,398*u.nm))

And this is what I’ve done for the decorative left border above: it’s the H and K line profiles for 0.1% of the stars LAMOST has classified as G8. Building the image didn’t take more than a few seconds (where I’d like the cutouts to be faster by a factor of 10; I guess that’s about an afternoon of work for me, so if it’d save you more than that afternoon, poke me to do it).

What’s coming back is tables. By the time python has digested these, they’re numpy record arrays. Thus, you can immediately bring in your beloved scipy (or whatever). For instance, if for some reason you’re convinced that the H and K lines should be fit by identical Gaussians in the boring case and would like to find objects for which that’s patently untrue and that hence could be un-boring, here’s how you could do that:

import numpy
from astropy import units as u
from scipy.optimize import curve_fit


def spectral_model(wl, c1, c2, depth, width):
    return (1
        -depth*numpy.exp(-numpy.square(wl-c1)
            /numpy.square(width))
        -depth*numpy.exp(-numpy.square(wl-c2)
            /numpy.square(width)))

for pubdid, prof in get_profiles(
        "G8", (392*u.nm,398*u.nm), 0.01, 4):
    prof["flux"] /= max(prof["flux"])
    popt, pcov = curve_fit(
        spectral_model, prof["spectral"], prof["flux"],
        sigma=prof["flux_error"],
        p0=[3968, 3934, 1, 1])
    if pcov[3][3]>1:
        break

– where get_profiles is essentially doing the TAP plus datalink routine above, except I’m swallowing spectra with too much noise and I have the function transform the spectral coordinate into the objects’ rest frames. If you’re curious how I’m doing this just based on the IVOA Spectral Data Model, check the source linked at the bottom of this post.
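
In case you would like a starting point before digging into that source: here is a heavily simplified sketch of what a get_profiles of this kind could look like. It is emphatically not the real function – it skips the noise rejection and the rest-frame transformation just mentioned, its signature is reduced, and the equality match on ssa_targsubclass is my guess at how to select the G8 stars.

import pyvo

SVC = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

def get_profiles(subclass, band, sample_percent):
    """Yield (ssa_pubdid, band-limited profile) pairs for LAMOST5 spectra
    of the given subclass (simplified: no noise filtering, no rest frames)."""
    result = SVC.run_sync(
        "select ssa_pubdid from lamost5.data tablesample({})"
        " where ssa_targsubclass='{}'".format(sample_percent, subclass))
    # assuming the datalink iterator stays in step with the result rows
    for row, dl in zip(result, result.iter_datalinks()):
        proc = next(dl.iter_procs())
        yield row["ssa_pubdid"], proc.processed(band=band)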

I’ve just run this, and the first spectrum that the machinery flagged as suspicious was this:

[Image: A fairly boring late G spectrum]

– which doesn’t look like I’ve made a discovery just yet. But that doesn’t mean there’s not a lot to find within LAMOST5’s lines…

To get you up to speed quickly: here’s the actual python3 code I ran for the “analysis” and the plot.

DaCHS 1.4 is out

[Image: Dachs logo with "version 1.4" superposed]

Since the Groningen Interop is over, it’s time for a DaCHS release, and so, roughly half a year after the release of DaCHS 1.3, today I’ve pushed DaCHS 1.4 into our Debian repository.

As usual, you should upgrade as soon as you find time to do so, because upgrades become more difficult if they span large version gaps; and one of these days you will need some new feature or run into one of the odd bugs. Upgrading is a good opportunity to also get your DaCHS ready for buster by adding the repos mentioned there.

The list of new features is rather short this time around. Here are some noteworthy ones:

  • There’s now an XML grammar that can be used when you have to parse smallish snippets of XML as, for instance, in VOEvent.
  • You can now use TABLESAMPLE(1) after a table specification in DaCHS’ ADQL to tell the database engine to just use 1% of a table for a query. While this isn’t a precise way to sample tables, it’s great when developing queries.
  • Also among new features I’d like to see in ADQL and have therefore put into DaCHS is GENERATE_SERIES(a,b), which is what is known as a table-generating function in SQL. If you know SDSS CasJobs, you’ll have seen lots of those already. GENERATE_SERIES, however, is really plain: it just spits out a table with a column with integers between a and b. For an example of why one might want to have that, check out the poster I’m linking to in my ADASS report.
  • If you have an updating data descriptor (usually, because you keep feeding data into a data collection), DaCHS will no longer automatically re-make its dependencies (like, say, views). That’s because that’s not necessary in general, and it’s a pain if every update on an obscore-published table tears down and rebuilds the obscore view. For the rare cases when you do need to rebuild dependencies, there’s now a remakeOnDataChange attribute on data.
  • At the interop, I’ve mentioned a few use cases for knowing which server software you’re talking to, and I’ve said that people should set their server headers to informative values. DaCHS does that now.

To conclude on a low note: This is probably going to be the last release of DaCHS for python 2. Even though we will have to shed a dependency or two that simply will not be ported to python 3, and even though I’m rather unhappy with a few properties of the python 3 port of twisted, there’s probably no way to escape this, given that Debian is purging out python 2 packages quickly already.

So, when we meet again for the next release, you’ll probably be looking at DaCHS 2.0, and where you have custom code in your RDs, it’s rather likely that you’ll see a minor amount of breakage. I promise I’ll do everything I can to make the migration easy for deployers, but I can’t do higher magic, so: If there’s ever been a time to add regression tests to your RDs, it’s now.

ADASS and Interop

[ADASS group photo]
ADASS XXIX is a big conference with lots of attendants. I’ve taken the liberty of scaling the photo so you really won’t recognise me (though I am on the photo). Note that, regrettably, the interop will be a lot smaller.

The people that create the Virtual Observatory standards, organised in the IVOA, meet twice a year: Once in spring for a five-day meeting (this year it happened in Paris), and once in autumn for a three-day meeting back-to-back to ADASS, the venerable (this year it’s the 29th installment) meeting of people dealing with astronomy and computers.

We’re now on day three of ADASS, and for me, so far this has been more of an endless hackathon, with discussing and hacking on things like mirrors for DFBS, ADQL 2.1, the evolution of IVOA vocabularies (more on this soon somewhere around here), a vocabulary of object types, getting LAMOST 5 published properly in the VO, the measurements data model, convincing more registries to push out space-time coverage for their resources (I’m showing a poster on that), and a lot more.

So, getting to actually listen to talks during ADASS almost is something of a luxury, and a mind-widening one at that – I’ve just listened to a talk about effectively doubling the precision of VLBI geodesy (in this case, measuring the location of radio telescopes to a few millimeters) by a piece of clever software, and before that I could learn a bit about how complex it is to figure out how much interference something emitting radio waves will cause in some other place on earth (like, well, a radio telescope). In case you’re curious: A bit more than a year from now, short papers on the topics will appear in the proceedings of ADASS XXIX, which in turn you’ll find in the ADASS proceedings collections (or on arXiv before that).

Given the experience of the last few days, I doubt I’ll do anything like the live blog from Paris linked above. I still can’t resist mentioning that at ADASS, I’m having a poster that’s little more than an ad blitz for STC in the registry.

Update (2019-10-13): Well, one week later I’m sitting in the closing session of the Interop, and I’ve even already given my summary of Semantics activities during the interop. Other topics I’ve talked about at this interop include interoperable authentication (I’m really interested in this because I’d like to enable persistent TAP uploads, where your uploaded tables are still there for you when you come back), a minor update to SimpleDALRegExt (which is overall rather technical and you probably don’t want to look at), on the takeup of new Registry tech (which might come over as somewhat sad, but considering that you have to pull along many people to have changes in “the” Registry, it’s not so bad at all), and on, as Mark Taylor called it, operational identification of server software (which I consider entertaining in its somewhat erratic narrative).

And now, after 7 days of essential nonstop discussion and brainstorming, I’m longing to slump into a chair on the train back to Heidelberg and just enjoy the landscape rolling by.

ADQL Traps #1: NULL

[Image: 0≠NULL≠NULL]
NULL is a difficult concept. Not only in SQL.

I recently got embarrassed by ADQL NULLs, i.e., the magic value indicating that a value in a given column is missing. And since that’s a common source of errors when writing ADQL queries, I’ll take this as a cue for a blog post.

The concrete background is fairly technical and registry-ish; suffice it to say that some data providers who implemented interfaces conforming to some standard didn’t properly say so in their registry records. Back in RegTAP 1.0 (that’s the standard that says how a client like TOPCAT talks to the VO Registry), I decided to work around that by fudging the pattern for how to discover those interfaces so they’d still be found.

In RegTAP 1.1, which is now under review by the VO community, I wanted to do away with that workaround. But would that break anything? This question translates to “are there vs:ParamHTTP interfaces that don’t have a role attribute of std”. Whatever “ParamHTTP” and “role attribute” actually mean, just appreciate that it looks like it might translate into SQL like

select * from rr.interface
where
  intf_type='vs:paramhttp'
  and not intf_role='std'

I ran that query, rejoiced because it didn’t return anything, removed the workaround from the standard, and then was shot down when I read Mark’s mail (politely) saying I’m wrong and there are services still requiring the workaround. As usual: If a query returns what you expect, be double careful.

What went wrong? Well, NULL semantics. You see, in SQL NULL is never equal to anything, not even itself (it’s like NaN in IEEE floats in that: try n = float('nan');print(n==n) in Python and look again if you’re cool about it). It’s also not unequal. Don’t take my word for it. Try

select * from tap_schema.schemas where NULL=NULL

and

select * from tap_schema.schemas where NULL!=NULL

– you’ll get empty results in both cases.

What does that mean for science queries? Well, whenever there’s NULLs in columns (and the only safe assumption for now is that they may hide in there; we should probably add non-null as a column property in the tap schema and in VODataService some day), you need to be careful in particular with inverted logic.

Here’s an example: Suppose you want to investigate NGC objects brighter than 10 mag in B in one bin and everything else in another. The ones brighter are simple:

select count(*) from openngc.data where mag_b<10

(try it on the TAP server at http://dc.g-vo.org/tap, it’s 383 in the current release). It becomes difficult for “the rest”. If you write

select count(*) from openngc.data where not mag_b<10

or, equivalently,

select count(*) from openngc.data where mag_b>=10

you’ll get (for the current release) 10887. However, the whole catalogue has 13954 entries, so there’s 13954-10887-383=2684 rows missing. Your “rest” has missed everything for which mag_b isn’t given. Sure enough,

select count(*) from openngc.data where mag_b is null

(and this is the only good way to compare against null) gives 2684.

The right way to say “anything for which mag_b is not smaller than 10” thus is

select count(*) from openngc.data 
where 
  not mag_b<10
  or mag_b is null

Moral: Unless you’re sure there are no missing values (i.e., NULLs) in a column you’re looking at, think about what these mean to your research (or other) question: Should these rows just vanish? Then you usually don’t need to do anything and the SQL semantics magically do the right thing (which is why things are defined as they are). If, however, the corresponding rows would mean something to your question, you need to be explicit, and you must have some condition involving IS NULL or IS NOT NULL.

The trouble, of course, is that just knowing this still isn’t enough. You need to remember it in the right moment. Or you’ll share my fate of suffering some public embarrassment.

APPLAUSE via Obscore

[a composite of two rather noisy photo plates]
Aladin showing some Bamberg Sky Patrol plates (see towards the end of the post for what this is and how I made it).

At the Astroplate conference I blogged about recently, the people behind APPLAUSE gave a couple of talks about their Data Release 3. APPLAUSE is a fairly massive endeavour to make available data from some of the larger plate archives in Germany, and its DR3 even hit the non-Astronomy press last February.

Already for previous APPLAUSE releases, I’ve wanted to bring this data (or rather, its metadata) to the VO, but it never quite happened, basically because there was always another little thing that turned out to be too tedious to work out via mail. However, working out things interactively is exactly what conferences are great for. So, the kind APPLAUSE folks (thanks, Taavi and Harry) and I used the Astroplate to map their database schema (“schema” is jargon for what boils down to the set of tables and columns with which they describe their data) to the much simpler (and, admittedly, less powerful) IVOA Obscore one.

Sure, Obscore doesn’t deal with multiple exposures (like when the target field and the north pole were exposed on one plate to help precision photometry), object-guided images, and all the other interesting techniques that astronomers applied in the pre-digital age; it also doesn’t usefully cope with multiple scans of the same plate (for instance, to correct for imprecisions in the mechanics of flatbed scanners). APPLAUSE, of course, has to cope with them, since there are many reasons to preserve data of this kind.

Obscore, on the other hand, is geared towards uniform discovery, where too funky datasets in all likelihood cause more harm than good. So, when we mapped APPLAUSE to Obscore, of the 101138 scans of 70276 plates that the full APPLAUSE holds in DR3, only 44000 plate scans made it into the Obscore table. The advantage: whatever can be sensibly mapped to Obscore can now be queried together with all the other data in the world that others have published through Obscore.

You can immediately see the effect when you run the little python program doing the global discovery we gave in our plates tutorial. Here’s what it prints now (values from pre-APPLAUSE-in-Obscore are in square brackets):

Column t_exptime: 3460 values
  Min   12, Max 15300, Mean 890.24  [previous mean: 370.722]
---
Column em_mean: 3801 values
  Min 1.8081e-09, Max 9.3e-07, Mean 6.40804e-07 [No change: Sigh!]
---
Column t_mean: 4731 values
  Min 12564.5, Max 58126.3, Mean 49897.9 [previous mean: 51909.1]
---
Column instrument_name: 4747 values
  Matches from , Petzval, [Max Wolf's residence in
  Heidelberg, Maerzgasse, Wolf's Doppelastrograph,
  Heidelberg Koenigstuhl (24), Wolf's
  Doppelastrograph,] AG-Astrograph, [Zeiss Triplet
  15 cm Potsdam-Telegrafenberg], Zeiss Triplet,
  Astrograph (four 10-cm Tessar f/6 cameras),
  [3.5m APO, ROSAT PSPCC, Heidelberg Koenigstuhl
  (24), Bruce Astrograph, Calar Alto (493),
  Schmidt], Grosser Refraktor, [ROSAT HRI,
  DK-1.54], Hamburger Schmidt-Spiegel,
  [DFOSC_FASU], ESO 1-metre Schmidt telescope,
  Great Schmidt Camera, Lippert-Astrograph, Ross-B
  3", [AZT 22], Astrograph (six 10-cm Tessar f/6
  cameras), 1m-Spiegelteleskop, [ROSAT PSPCB],
  Astrograph (ten 10-cm Tessar f/6 cameras), Zeiss
  Objective
---
Column access_url: 4747 values [4067]

So – for the fields selected in the tutorial, there are 15% more images in the global Obscore image pool now than there were before APPLAUSE, and their mean observation date went a bit farther into the past. I’ve not made any statistics, but I suspect for many other fields the gain is going to be much higher. For a strong effect, try some random region covered by the Bamberg Sky Patrol on the southern sky.

But you have probably noticed the deep sigh in the annotations to the statistics above: Yes, we don’t have the spectral band for the APPLAUSE data, which is why the stats on em_mean don’t change. As a matter of fact, from the Obscore data you can’t even guess whether a plate is “more red” or “rather blue”, as Obscore doesn’t have an (agreed-upon) field for “qualitative bandpass indicator”.

For some other data collections, we did map known emulsion/filter combinations to rough bandpasses (e.g., the Palomar-Leiden Trojan Survey, which only had a few of them). For APPLAUSE, there are 435 combinations of filter and emulsion (that’s a VOTable link that you can paste into TOPCAT’s load button in order to have a look at the table). Granted, quite a few of these pairs are (more or less) spurious because of inconsistent spelling. But we still gave up on researching the bandpasses even before we started.

If you’re a photographic plate buff: You could help us and posterity a lot if you could go through this list and at least for some combinations tell us what, roughly, the lower and upper limits of the corresponding bandpasses might have been (what DaCHS already knows, plate-relevant data near the bottom of the file). As usual, send mail to gavo@ari.uni-heidelberg.de if you have anything to contribute.

Finally, here’s the brief explanation of the image for this article: Well, I wanted to find some Bamberg Sky Patrol images for a single field to play with. I knew they were primarily located in the South, and were made using Tessar cameras. So, I ran

SELECT t_min, access_url, s_region
FROM ivoa.obscore
WHERE instrument_name like '%Tessar%'
AND 1=CONTAINS(POINT(345, -38), s_region)

on GAVO’s TAP service. Since Aladin 10, you can do that from within the program (although some versions will reject this query because they mistakenly believe the ADQL is bad. Query through TOPCAT and send the result over to Aladin if that bites you). Incidentally, when there are s_region values in Obscore tables, it’s a good idea to use them as I do here, as it’s quite a bit more likely that this query will use indices than some condition on s_ra and s_dec. But then not all services fill s_region properly, so for all-VO queries you will probably want to make do with s_ra and s_dec.
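
For completeness, here is what the two variants would look like from pyVO; the one-degree circle in the fallback is, of course, just a radius I made up for illustration:

import pyvo

svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")

# preferred when s_region is filled: this has a chance of using an index
with_region = svc.run_sync("""
    SELECT t_min, access_url, s_region FROM ivoa.obscore
    WHERE instrument_name LIKE '%Tessar%'
    AND 1=CONTAINS(POINT(345, -38), s_region)""")

# fallback for all-VO queries where s_region may be missing
with_point = svc.run_sync("""
    SELECT t_min, access_url, s_ra, s_dec FROM ivoa.obscore
    WHERE instrument_name LIKE '%Tessar%'
    AND 1=CONTAINS(POINT(s_ra, s_dec), CIRCLE(345, -38, 1))""")

print(len(with_region), len(with_point))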

From that result I first made the inset bar graph in the article image to show the temporal distribution of the Patrol plates. And then I grabbed two (rather randomly selected) plates and had Aladin produce a red-blue composite of them. Whatever is really red or really blue in that image may correspond to a transient event. Or, as certainly the case with that little hair (or whatever) that shines out in blue, it may not.

Find Outliers using ADQL and TAP

[Annie Cannon's notebook and a plot]
Two pages from Annie Cannon’s notebooks¹, and a histogram of the basic BP-RP color distribution in the HD catalogue (blue) and the distribution of the outliers (red). For more of Annie Cannon’s notebooks, search on ADS.

The other day I gave one of my improvised live demos (“What, roughly, are you working on?”) and I ended up needing to translate identifiers from the Henry Draper Catalogue to modern positions. Quickly typing “Henry Draper” into TOPCAT’s TAP search window didn’t yield anything useful (some resources only using the HD, and a TAP service that didn’t support uploads – hmpf).

Now, had I tried the somewhat more thorough WIRR Registry interface, I’d have noted the HD catalogue at VizieR and in particular Fabricius et al’s HD-Tycho 2 match (explaining why they didn’t show up in TOPCAT is a longer story; we’re working on it). But alas, I didn’t, and so I set out to produce a catalogue matching HD and Gaia DR2, easily findable from within TOPCAT’s TAP client. Well, it’s here in the form of the hdgaia.main table in our data center.

Considering the nontrivial data discovery and some yak shaving I had to do to get from HD identifiers to Gaia DR2 ones, it was perhaps not as futile an exercise as I had thought now and then during the preparation of the thing. And it gives me the chance to show a nice ADQL technique to locate outliers.

In this case, one might ask: Which objects might Annie Cannon and colleagues have misclassified? Or perhaps the objects have changed their spectrum between the time Cannon’s photographic plates have been taken and Gaia observed them? Whatever it is: We’ll have to figure out where there are unusual BP-RPs given the spectral type from HD.

To figure this out, we’ll first have to determine what’s “usual”. If you’ve worked through our ADQL course, you know what to expect: grouping. So, to get a table of average colours by spectral type, you’d say (all queries executable on the TAP service at http://dc.g-vo.org/tap):

select spectral, 
  avg(phot_bp_mean_mag-phot_rp_mean_mag) as col,
  count(*) as ct
from hdgaia.main
join gaia.dr2light
using (source_id)
group by spectral

– apart from the join that’s needed here because we want to pull photometry from gaia, that’s standard fare. And that join is the selling point of this catalog, so I won’t apologise for using it already in the first query.

The next question is how strict we want to be before we say something that doesn’t have the expected colour is unusual. While these days you can rather easily use actual distributions, at least for an initial analysis just assuming a Gaussian and estimating its FWHM as the standard deviation works pretty well if your data isn’t excessively nasty. Regrettably, there is no aggregate function STDDEV in ADQL (you could still ask for it: head over to the DAL mailing list before ADQL 2.1 is a done deal!). However, you may remember that Var(X)=E(X²)−E(X)², that the average is an estimator for the expectation, and that the standard deviation is actually an estimator for the square root of the variance. And that these estimators will work like a charm if you’re actually dealing with Gaussian data.
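
If you mistrust that reasoning, a 30-second numpy experiment is reassuring (the numbers are arbitrary, of course):

import numpy

x = numpy.random.normal(loc=1.5, scale=0.3, size=100000)
sig_est = numpy.sqrt(numpy.mean(x**2)-numpy.mean(x)**2)
print(sig_est, numpy.std(x))   # both come out close to 0.3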

So, let’s use that to compute our standard deviations. While we are at it, throw out everything that’s not a star², and ensure that our groups have enough members to make our estimates non-ridiculous; that last bit is done through a HAVING clause that essentially works like a WHERE, just for entire GROUPs:

select spectral, 
  avg(phot_bp_mean_mag-phot_rp_mean_mag) as col,
  sqrt(avg(power(phot_bp_mean_mag-phot_rp_mean_mag, 2))-
    power(avg(phot_bp_mean_mag-phot_rp_mean_mag), 2)) as sig_col,
  count(*) as ct
from hdgaia.main
join gaia.dr2light
  using (source_id)
where m_v<18
group by spectral
having count(*)>10

This may look a bit scary, but if you read it line by line, I’d argue it’s no worse than our harmless first GROUP BY query.

From here, the step to determine the outliers isn’t big any more. What the query I’ve just written produces is a mapping from spectral type to the means and scales (“µ,σ” in the rotten jargon of astronomy) of the Gaussians for the colors of the stars having that spectral type. So, all we need to do is join that information by spectral type to the original table and then see which actual colors are further off than, say, three sigma. This is a nice application of the common table expressions I’ve tried to sell you in the post on ADQL 2.1; our determine-what’s-usual query from above stays nicely separated from the (largely trivial) rest:

with standards as (select spectral, 
  avg(phot_bp_mean_mag-phot_rp_mean_mag) as col,
  sqrt(avg(power(phot_bp_mean_mag-phot_rp_mean_mag, 2))-
    power(avg(phot_bp_mean_mag-phot_rp_mean_mag), 2)) as sig_col,
  count(*) as ct
  from hdgaia.main
  join gaia.dr2light
  using (source_id)
  where m_v<18
  group by spectral
  having count(*)>10)
select * 
from hdgaia.main
join standards 
using (spectral)
join gaia.dr2light using (source_id)
where 
  abs(phot_bp_mean_mag-phot_rp_mean_mag-col)>3*sig_col
  and m_v<18

– and that's a fairly general pattern for doing an initial outlier analysis on the remote side. For HD, this takes a few seconds and yields 2722 rows (at least until we also push HDE into the table). That means you can keep 99% of the rows (the boring ones) on the server and can just pull the ones that could be interesting. These 99% savings aren't terribly much with a catalogue like the HD that's small by today's standards. For large catalogs, it's the difference between a download of a couple of minutes and pulling data for a day while frantically freeing disk space.

By the way, the fact that there are only 2.7e3 outliers among 2.25e5 objects – while Annie Cannon, Williamina Fleming, Antonia Maury, Edward Pickering, and the rest of the crew not only had to come up with the spectral classification while working on the catalogue but also had to classify all these objects manually – is an amazing feat even if all of those rows actually were misclassifications (which they certainly aren't); the machine classifiers of today would be proud to only get 1% wrong.

The inset in the facsimile of Annie Cannon's notebooks above shows how the outliers are distributed in color space relative to the full catalogue, where the basic catalogue is in blue and the outliers (scaled by 70) in red. Wouldn't it make a nice little side project to figure out the reason for the outlier clump on the red side of the histogram?


¹The notebook pages are from a notebook Annie Cannon used in 1929. The material was kindly provided by Project PHAEDRA at the John G. Wolbach Library, Harvard College Observatory.
²I'll not hide that I was severely tempted to undo the mapping of object classes to – for HD – unrealistic magnitudes (20 .. 50) but then left the HD as it came from ADC; I still doubt that decision was well taken, and sure enough, the example query above already has insane constraints on m_v reflecting that encoding. From today's position, of course there should have been an extra column or, better yet, a different catalogue for nonstellar objects. Ah well. It's always hard to break unhealthy patterns.