• Doing Large-Scale ADQL Queries

    You can do many interesting things with TAP and ADQL while just running queries returning a few thousand rows after a few seconds. Most examples you would find in tutorials are of that type, and when the right indexes exist on the queried tables, the scope of, let's say, casual ADQL goes far beyond toy examples.

    Actually, arranging things such that you only fetch the data you need for the analysis at hand – and that often is not much more than the couple of kilobytes that go into a plot or a regression or whatever – is a big reason why TAP and ADQL were invented in the first place.

    But there are times when the right indexes are not in place, or when you absolutely have to do something for almost everything in a large table. Database folks call that a sequential scan, or seqscan for short. For larger tables (to give an order of magnitude: beyond 10⁷ rows in my data centre, but that obviously depends), this means you have to allow for longer run times. There are even times when you may need to fetch large portions of such a large table, which means you will probably run into hard match limits: there is just no way to retrieve your full result set in one go.

    This post is about ways to deal with such situations. But let me state already that having to go these paths (in particular the partitioning we will get to towards the end of the post) may be a sign that you want to re-think what you are doing, and below I am briefly giving pointers on that, too.

    Raising the Match Limit

    Most TAP services will not let you retrieve arbitrarily many rows in one go. Mine, for instance, at this point will snip results off at 20'000 rows by default, mainly to protect you and your network connection against being swamped by huge results you did not expect.

    You can, and frequently will have to (even for an all-sky level 6 HEALPix map, for instance, as that will retrieve 49'152 rows), raise that match limit. In TOPCAT, that is done through a little combo box above the query input (you can enter custom values if you want):

    A screenshot with a few widgets from TOPCAT.  A combo box is opened, and the selection is on "20000 (default)".

    If you are somewhat confident that you know what you are doing, there is nothing wrong with picking the maximum limit right away. On the other hand, if you are not prepared to do something sensible with, say, two million rows, then perhaps put in a smaller limit just to be sure.

    In pyVO, which we will be using in the rest of this post, this is the maxrec argument to run_sync and its sibling methods on TAPService.
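
    For instance, a minimal sketch (the query and the limit of 50'000 are just for illustration, and the server may still apply a hard cap of its own):

    import pyvo

    svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
    # maxrec raises (or lowers) the match limit for this one request
    result = svc.run_sync(
        "SELECT TOP 50000 source_id, phot_g_mean_mag FROM gaia.dr3lite",
        maxrec=50000)
    print(len(result))
    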

    Giving the Query More Time

    When dealing with non-trivial queries on large tables, you will often also have to give the query some extra time. On my service, for instance, you only have a few seconds of CPU time when your client uses TAP's synchronous mode (by calling the TAPService.run_sync method). If your query needs more time, you will have to go async. In the simplest case, all it takes is writing run_async rather than run_sync (below, we will use a somewhat more involved API; find out more about this in our pyVO course).

    In async mode, you have two hours on my box at this point; this kind of time limit is, I think, fairly typical. If even that is not enough, you can ask for more time by changing the job's execution_duration parameter (before submitting it to the database engine; you cannot change the execution duration of a running job, sorry).

    Let us take the example of a colour-magnitude diagram for stars in Gaia DR3 with distances of about 300 pc according to Bailer-Jones et al (2021); to make things a bit more entertaining, we want to load the result in TOPCAT without first downloading it locally; instead, we will transmit the result's URI directly to TOPCAT[1], which means that your code does not have to parse and re-package the (potentially large) data.

    On the first reading, focus on the main function, though; the SAMP fun is for later:

    import time
    import pyvo
    
    QUERY = """
    SELECT
        source_id, phot_g_mean_mag, pseudocolour,
        pseudocolour_error, phot_g_mean_flux_over_error
    FROM gedr3dist.litewithdist
    WHERE
        r_med_photogeo between 290 and 310
        AND ruwe<1.4
        AND pseudocolour BETWEEN 1.0 AND 1.8
    """
    
    def send_table_url_to_topcat(conn, table_url):
        client_id = pyvo.samp.find_client_id(conn, "topcat")
        message = {
            "samp.mtype": "table.load.votable",
            "samp.params": {
                "url": table_url,
                "name": "TAP result",}
        }
        conn.notify(client_id, message)
    
    
    def main():
        svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
        job = svc.submit_job(QUERY, maxrec=3000)
        try:
            job.execution_duration=10000  # that's 10000 seconds
            job.run()
            job.wait()
            assert job.phase=="COMPLETED"
    
            with pyvo.samp.connection(addr="127.0.0.1") as conn:
                send_table_url_to_topcat(conn, job.result_uri)
        finally:
            job.delete()
    
    if __name__=="__main__":
        main()
    

    As written, this will be fast thanks to maxrec=3000, and you wouldn't really have to bother with async just yet. The result looks nicely familiar, which means that in that distance range, the Bailer-Jones distances are pretty good:

    A rather sparsely populated colour-magnitude diagram with a pretty visible main sequence.

    Now raise the match limit to 30000, and you will already need async. Here is what the result looks like:

    A more densely populated colour-magnitude diagram with a pretty visible main sequence, where a giant branch starts to show up.

    Ha! Numbers matter: at least we are seeing a nice giant branch now! And of course the dot colours do not represent the colours of the stars with the respective pseudocolour; the directions of blue and red are ok, but most of what you are seeing here will look rather ruddy in reality.

    You will not really need to change execution_duration here, nor will you need it even when setting maxrec=1000000 (or anything more, for that matter, as the full result set size is 330'545), as that ends up finishing within something like ten minutes. Incidentally, the result for the entire 300 pc shell, now as a saner density plot, looks like this:

    A full colour-magnitude diagram with densities coded in colours. A huge blob is at the red end of the main sequence, and there is a well-defined giant branch and a very visible horizontal branch.

    Ha! Numbers matter even more. There is now even a (to me surprisingly clear) horizontal branch in the plot.

    Planning for Large Result Sets? Get in Contact!

    Note that if you were after a global colour-magnitude diagram as the one I have just shown, you should probably do server-side aggregation (that is: compute the densities in a few hundred or thousand bins on the server and only retrieve those then) rather than load ever larger result sets and then have the aggregation be performed by TOPCAT. More generally, it usually pays to try and optimise ADQL queries that are slow and have huge result sets before fiddling with async and, even more, with partitioning.
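
    To give an idea of what such server-side aggregation could look like for the diagram above, here is a sketch (the bin widths of 0.02 in pseudocolour and 0.05 mag are arbitrary choices, and you would still do the plotting client-side):

    import pyvo

    svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
    # compute a binned colour-magnitude density on the server and only
    # retrieve the per-bin counts (a few thousand rows at most)
    res = svc.run_sync("""
        SELECT
            ROUND(pseudocolour/0.02)*0.02 AS colour_bin,
            ROUND(phot_g_mean_mag/0.05)*0.05 AS mag_bin,
            COUNT(*) AS ct
        FROM gedr3dist.litewithdist
        WHERE r_med_photogeo BETWEEN 290 AND 310
            AND ruwe<1.4
            AND pseudocolour BETWEEN 1.0 AND 1.8
        GROUP BY colour_bin, mag_bin
        """, maxrec=100000)
    print(res.to_table())
    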

    Most operators will be happy to help you do that; you will find some contact information in TOPCAT's service tab, for instance. In pyVO, you could use the get_contact method of the objects you get back from the Registry API[2]:

    >>> pyvo.registry.search(ivoid="ivo://org.gavo.dc/tap")[0].get_contact()
    'GAVO Data Centre Team (+49 6221 54 1837) <gavo@ari.uni-heidelberg.de>'
    

    That said: sometimes neither optimisation nor server-side aggregation will do it: You just have to pull more rows than the service's match limit. You see, most servers will not let you pull billions of rows in one go. Mine, for instance, will cap the maxrec at 16'000'000. What you need to do if you need to pull more than that is chunking up your query such that you can process the whole sky (or whatever else huge thing makes the table large) in manageable chunks. That is called partitioning.

    Uniform-Length Partitions

    To partition a table, you first need something to partition on. In database lingo, a good thing to partition on is called a primary key, typically a reasonably short string or, even better, an integer that maps injectively to the rows (i.e., no two rows have the same key). Let's keep Gaia as an example: the primary key designed for it is the source_id.

    In the simplest case, you can “uniformly” partition between 0 and the largest source_id, which you will find by querying for the maximum:

    SELECT max(source_id) FROM gaia.dr3lite
    

    This should be fast. If it is not, then there is likely no sufficiently capable index on the column you picked, and hence your choice of the primary key probably is not a good one. This would be another reason to turn to the service's contact address as above.
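
    Incidentally, you can ask a service up front which columns it has indexed: TAP_SCHEMA.columns carries an indexed flag. A quick sketch (the flag is part of the TAP standard, though how carefully services maintain it varies):

    import pyvo

    svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
    # list the columns of gaia.dr3lite for which the service declares an index
    res = svc.run_sync("""
        SELECT column_name, indexed
        FROM TAP_SCHEMA.columns
        WHERE table_name='gaia.dr3lite' AND indexed=1
        """)
    print(res.to_table())
    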

    In the present case, the query is fast and yields 6917528997577384320. With that number, you can write a program like this to split up your problem into N_PART sub-problems:

    import pyvo
    
    MAX_ID, N_PART = 6917528997577384320+1, 100
    # multiplying before the integer division makes the last limit exactly MAX_ID
    partition_limits = [MAX_ID*i//N_PART
      for i in range(N_PART+1)]
    
    svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
    main_query = "SELECT count(*) FROM ({part}) AS q"
    
    for lower, upper in zip(partition_limits[:-1], partition_limits[1:]):
      result = svc.run_sync(main_query.format(part=
        "SELECT * FROM gaia.dr3lite"
        "  WHERE source_id BETWEEN {} and {} ".format(lower, upper-1)))
      print(result)
    

    Exercise: Can you see why the +1 is necessary in the MAX_ID assignment?

    This range trick will obviously not work when the primary key is a string; I would probably partition by first letter(s) in that case.
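
    Something like the following might do, as a rough sketch only (the table and column names are made up, and you would have to make sure the pattern list really covers all identifiers, including ones starting with digits):

    import string
    import pyvo

    svc = pyvo.dal.TAPService("http://dc.g-vo.org/tap")
    # hypothetical string primary key: one chunk per initial letter
    for letter in string.ascii_uppercase:
        res = svc.run_sync(
            "SELECT count(*) AS ct FROM my_schema.my_table"
            " WHERE main_id LIKE '{}%'".format(letter))
        print(letter, res)
    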

    Equal-Size Partitions

    However, this is not the end of the story. Gaia's (well thought-out) enumeration scheme reflects sky positions to a large degree. So do, by the way, the IAU conventions for object designations. Since most astronomical objects are distributed highly unevenly on the sky, creating partitions of equal size in identifier space will yield chunks of dramatically different sizes (a factor of 100 is not uncommon) in all-sky surveys.

    In the rather common event that you have a use case in which you need a guaranteed maximum result size per partition, you will therefore have to use two passes, first figuring out the distribution of objects and then computing the desired partition from that.

    Here is an example for how one might go about this:

    from astropy import table
    import pyvo
    
    MAX_ID, ROW_TARGET = 6917528997577384320+1, 10000000
    
    ENDPOINT = "http://dc.g-vo.org/tap"
    
    # the 10000 is just the number of bins to use; make it too small, and
    # your initial bins may already overflow ROW_TARGET
    ID_DIVISOR = MAX_ID/10000
    
    DISTRIBUTION_QUERY = f"""
    select round(source_id/{ID_DIVISOR}) as bin, count(*) as ct
    from gaia.dr3lite
    group by bin
    """
    
    
    def get_bin_sizes():
      """returns a ordered sequence of (bin_center, num_objects) rows.
      """
      # since the partitioning query already is expensive, cache it,
      # and use the cache if it's there.
      try:
        with open("partitions.vot", "rb") as f:
          tbl = table.Table.read(f)
      except IOError:
        # Fetch from source; takes about 1 hour
        print("Fetching partitions from source; this will take a while"
          " (provide partitions.vot to avoid re-querying)")
        svc = pyvo.dal.TAPService(ENDPOINT)
        res = svc.run_async(DISTRIBUTION_QUERY, maxrec=1000000)
        tbl = res.table
        with open("partitions.vot", "wb") as f:
          tbl.write(output=f, format="votable")
    
      res = [(row["bin"], row["ct"]) for row in tbl]
      res.sort()
      return res
    
    
    def get_partition_limits(bin_sizes):
      """returns a list of limits of source_id ranges exhausting the whole
      catalogue.
    
      bin_sizes is what get_bin_sizes returns (and it must be sorted by
      bin center).
      """
      limits, cur_count = [0], 0
      for bin_center, bin_count in bin_sizes:
        if cur_count+bin_count>ROW_TARGET:
          limits.append(int(bin_center*ID_DIVISOR-ID_DIVISOR/2))
          cur_count = 0
        cur_count += bin_count
      limits.append(MAX_ID)
      return limits
    
    
    def get_data_for(svc, query, low, high):
      """returns a TAP result for the (simple) query in the partition
      between low and high.
    
      query needs to query the ``sample`` table.
      """
      job = svc.submit_job("WITH sample AS "
        "(SELECT * FROM gaia.dr3lite"
        "  WHERE source_id BETWEEN {} and {}) ".format(lower, upper-1)
        +query, maxrec=ROW_TARGET)
      try:
        job.run()
        job.wait()
        return job.fetch_result()
      finally:
        job.delete()
    
    
    def main():
      svc = pyvo.dal.TAPService(ENDPOINT)
      limits = get_partition_limits(get_bin_sizes())
      for ct, (low, high) in enumerate(zip(limits[:-1], limits[1:])):
        print("{}/{}".format(ct, len(limits)))
        res = get_data_for(svc, <a query over a table sample>, low, high-1)
        # do your thing here
    

    But let me stress again: If you think you need partitioning, you are probably doing it wrong. One last time: If in any sort of doubt, try the services' contact addresses.

    [1]Of course, if you are doing long-running queries, you probably will postpone the deletion of the service until you are sure you have the result wherever you want it. Me, I'd probably print the result URL (for when something goes wrong on SAMP or in TOPCAT) and a curl command line to delete the job when done. Oh, and perhaps a reminder that one ought to execute the curl command line once the data is saved.
    [2]Exposing the contact information in the service objects themselves would be a nice little project if you are looking for contributions you could make to pyVO; you would probably do a natural join between the rr.interface and the rr.res_role tables and thus go from the access URL (you generally don't have the ivoid in pyVO service objects) to the contact role.
  • DaCHS 2.11: Persistent TAP Uploads

    The DaCHS logo, a badger's head and the text "VO Data Publishing"

    The traditional autumn release of GAVO's server package DaCHS is somewhat late this year, but not so late that I could not still claim it comes after the Interop. So, here it is: DaCHS 2.11 and the traditional what's new post.

    But first, while I may have DaCHS operators' attention: If you have always wondered why things in DaCHS are as they are, you will probably enjoy the article Declarative Data Publication with DaCHS, which one day will be in the proceedings of ADASS XXXIV (and before that probably on arXiv). You can read it in a pre-preprint version already now at https://docs.g-vo.org/I301.pdf, and feedback is most welcome.

    Persistent TAP Uploads

    The potentially most important new feature of DaCHS 2.11 (in my opinion) will not be news to regular readers of this blog: Persistent TAP Uploads.

    At this point, no client supports this, and presumably when clients do support it, it will look somewhat different, but if you like the bleeding edge and have users that don't mind an occasional curl or requests call, you would be more than welcome to help try the persistent uploads. As an operator, it should be sufficient to type:

    dachs imp //tap_user
    

    To make this more useful, you probably want to hand out proper credentials (make them with dachs adm adduser) to people who want to play with this, and point the interested users to the demo jupyter notebook.

    I am of course grateful for any feedback, in particular on how people find ways to use these features to give operators a headache. For instance, I really would like to avoid writing a quota system. But I strongly suspect I will have to…

    On-loaded Execute-s

    DaCHS has a built-in cron-type mechanism, the execute Element. So far, you could tell it to run jobs every x seconds or at certain times of the day. That is fine for what this was made for: updates of “living” data. For instance, the RegTAP RD (which is what's behind the Registry service you are probably using if you are reading this) has something like this:

    <execute title="harvest RofR" every="40000">
      <job><code>
          execDef.spawnPython("bin/harvestRofR.py")
      </code></job>
    </execute>
    

    This will pull in new publishing registries from the Registry of Registries, though that is tangential; the main thing is that some code will run every 40 kiloseconds (or about 11 hours).

    Compared to plain cron, the advantage is that DaCHS knows context (for instance, the RD's resdir is not necessary in the example call), that you can sync with DaCHS' own facilities, and most of all that everything is in one place and can be moved together. By the way, it is surprisingly simple to run a RegTAP service of your own if you already run DaCHS. Feel free to inquire if you are interested.

    In DaCHS 2.11, I extended this facility to include “events” in the life of an RD. The use case seems rather remote from living data: Sometimes you have code you want to share between, say, a datalink service and some ingestion code. This is too resource-bound for keeping it in the local namespace, and that would again violate RD locality on top.

    So, the functions somehow need to sit on the RD, and something needs to stick them there. To do that, I recommended a rather hacky technique with a LOOP with codeItems in the respective howDoI section. But that was clearly rather odious – and fragile on top because the RD you manipulated was just being parsed (but scroll down in the howDoI and you will still see it).

    Now, you can instead tell DaCHS to run your code when the RD has finished loading and everything should be in place. In a recent example I used this to have common functions to fetch photometric points. In an abridged version:

    <execute on="loaded" title="define functions"><job>
      <setup imports="h5py, numpy"/>
      <code>
      def get_photpoints(field, quadrant, quadrant_id):
        """returns the photometry points for the specified time series
        from the HDF5 as a numpy array.
    
        [...]
        """
        dest_path = "data/ROME-FIELD-{:02d}_quad{:d}_photometry.hdf5".format(
          field, quadrant)
        srchdf = h5py.File(rd.getAbsPath(dest_path))
        _, arr = next(iter(srchdf.items()))
    
        photpoints = arr[quadrant_id-1]
        photpoints = numpy.array(photpoints)
        photpoints[photpoints==0] = numpy.nan
        photpoints[photpoints==-9999.99] = numpy.nan
    
        return photpoints
    
    
      def get_photpoints_for_rome_id(rome_id):
        """as get_photpoints, but taking an integer rome_id.
        """
        field = rome_id//10000000
        quadrant = (rome_id//1000000)%10
        quadrant_id = (rome_id%1000000)
        base.ui.notifyInfo(f"{field} {quadrant} {quadrant_id}")
        return get_photpoints(field, quadrant, quadrant_id)
    
      rd.get_photpoints = get_photpoints
      rd.get_photpoints_for_rome_id = get_photpoints_for_rome_id
    </code></job></execute>
    

    (full version; if this is asking you to log in, tell your browser not to wantonly switch to https). What is done here in detail is, again, not terribly relevant: it's the usual messing around with identifiers and paths and more or less broken null values that is a data publisher's everyday lot. The important thing is that with the last two statements, you will see these functions wherever you see the RD, which in RD-near Python code is just about everywhere.
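
    To illustrate what that buys you, here is a sketch of calling these helpers from elsewhere (the RD id rome/q and the rome_id are made up, and I am assuming DaCHS' gavo.api.getRD here):

    from gavo import api

    # once the RD is loaded, the on="loaded" job has run and the helpers
    # simply are attributes of the RD object
    rd = api.getRD("rome/q")
    photpoints = rd.get_photpoints_for_rome_id(410000123456)
    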

    dachs start taptable

    Since 2018, DaCHS has supported kickstarting the authoring of RDs, which is, I claim, the fun part of a data publisher's tasks, through a set of templates mildly customised by the dachs start command. Nobody should start a data publication with an empty editor window any more. Just pass the sort of data you would like to publish and start answering sensible questions. Well, “sort of data” within reason:

    $ dachs start list
    epntap -- Solar system data via EPN-TAP 2.0
    siap -- Image collections via SIAP2 and TAP
    scs -- Catalogs via SCS and TAP
    ssap+datalink -- Spectra via SSAP and TAP, going through datalink
    taptable -- Any sort of data via a plain TAP table
    

    There is a new entry in this list in 2.11: taptable. In both my own work and watching other DaCHS operators, I have noticed that my advice “if you want to TAP-publish any old material, just take the SCS template and remove everything that has scs in it” was not a good one. It is not as simple as that. I hope taptable fits better.

    A plan for 2.12 would be to make the ssap+datalink template less of a nightmare. So far, you basically have to fill out the whole thing before you can start experimenting, and that is not right. Being able to work incrementally is a big morale booster.

    VOTable 1.5

    VOTable 1.5 (at this point still a proposed recommendation) is a rather minor, cleanup-type update to the VO's main table format. Still, DaCHS has to declare that this is what it writes if we want to be able to declare refposition in COOSYS (which we do). Operators should not notice much of this, but it is good to be aware of the change in case there are overeager VOTable parsers out there or in case you have played with DaCHS' MIVOT generator; in 2.10, you could ask it to do its spiel by requesting the format application/x-votable+xml;version=1.5. In 2.11, it's application/x-votable+xml;version=1.6. If you have no idea what I was just saying, relax. If this becomes important, you will meet it somewhere else.

    Minor Changes

    That's almost it for the more noteworthy news; as usual, there are a plethora of minor improvements, bug fixes and the like. Let me briefly mention a few of these:

    • The ADQL form interface's registry record now includes the site name. In case you are in this list, please say dachs pub //adql after upgrading.
    • More visible legal info, temporal, and spatial coverage in table and service infos; one more reason to regularly run dachs limits!
    • VOUnit's % is now known to DaCHS (it should have been since about 2.9)
    • More vocabulary validation for VOResource generation; so, dachs pub might now complain to you when it previously did not. It is now right and was wrong before.
    • If you annotate a column as meta.bib.bibcode, its values will be rendered as ADS links
    • The RD info links to resrecs (non-DaCHS resources, essentially), too.

    Upgrade As Convenient

    As usual, if you have the GAVO repository enabled, the upgrade will happen as part of your normal Debian apt upgrade. Still, if you have not done so recently, have a quick look at upgrading in the tutorial. If, on the other hand, you use the Debian-distributed DaCHS package and you do not need any of the new features, you can let things sit and enjoy the new features after your next dist-upgrade.

  • At the Malta Interop

    A bronze statue of a running man with a newspaper in his hand in front of a massive stone wall.

    The IVOA meets in Malta, which sports lots of walls and fortifications. And a “socialist martyr” boldly stepping forward (like the IVOA, of course): Manwel Dimech.

    It is Interop time again! Most people working on the Virtual Observatory are convened in Malta at the moment and will discuss the development and reality of our standards for the next two days. As usual, I will report here on my thoughts and the proceedings as I go along, even though it will be a fairly short meeting: In northern autumn, the Interop is always back-to-back with ADASS, which means that most participants already have 3½ days of intense meetings behind them and will probably be particularly glad when we conclude the Interop on Sunday noon.

    The TCG discusses (Thursday, 15:00)

    Right now, I am sitting in a session of the Technical Coordination Group, where the chairs and vice-chairs of the Working and Interest Groups meet and map out where they want to go and how it all will fit together. If you look at this meeting's agenda, you can probably guess that this is a roller coaster of tech and paperwork, quickly changing from extremely boring to extremely exciting.

    For me up to now, the discussion about whether or not we want LineTAP at all was the most relevant agenda item; while I do think VAMDC would win by taking up the modern IVOA TAP and Registry standards (VAMDC was forked from the VO in the late 2000s), takeup has been meagre so far, and so perhaps this is solving a problem that nobody feels[1]. I have frankly (almost) only started LineTAP to avoid a SLAP2 with an accompanying data model that would then compete with XSAMS, the data model below VAMDC.

    On the other hand: I think LineTAP works so much more nicely than VAMDC for its use case (identify spectral lines in a plot) that it would be a pity to simply bury it.

    By the way, if you want, you can follow the (public; the TCG meeting is closed) proceedings online; zoom links are available from the programme page. There will be recordings later.

    At the Exec Session (Thursday, 16:45)

    The IVOA's Exec is where the heads of the national projects meet, with the most noble task of endorsing our recommendations and otherwise providing a certain amount of governance. The agenda of Exec meetings is public, and so will the minutes be, but otherwise this again is a closed meeting so everyone feels comfortable speaking out. I certainly will not spill any secrets in this post, but rest assured that there are not many of those to begin with.

    That I am in here is because GAVO's actual head, Joachim, is not on Malta and could not make it for video participation, either. But then someone from GAVO ought to be here, if only because a year down the road, we will host the Interop: In the northern autumn of 2025, the ADASS and the Interop will take place in Görlitz (regular readers of this blog have heard of that town before), and so I see part of my role in this session in reconfirming that we are on it.

    Meanwhile, the next Interop – and determining places is also the Exec's job – will be in the beginning of June 2025 in College Park, Maryland. So much for avoiding flight shame for me (which I could for Malta that still is reachable by train and ferry, if not very easily).

    Opening Plenary (Friday 9:30)

    A lecture hall with people, a slide “The University of Malta” at the wall.

    Alessio welcomes the Interop crowd to the University of Malta.

    Interops always begin with a plenary with reports from the various functions: The chair of the Exec, the chair of the committee of science priorities, and chair of technical coordination group. Most importantly, though, the chairs of the working and interest groups report on what has happened in their groups in the past semester, and what they are planning for the Interop (“Charge to the working groups”).

    For me personally, the kind words during Simon's State of the IVOA report on my VO lecture (parts of which he has actually reused) were particularly welcome.

    But of course there was other good news in that talk. With my Registry grandmaster hat on, I was happy to learn that NOIRLab has released a simple publishing registry implementation, and that ASVO's (i.e., Australia's) large TAP server will finally be properly registered, too. The prize for the coolest image, though, goes to VO France and in particular their solar system folks, who have used TOPCAT to visualise data on a model of comet 67P Churyumov–Gerasimenko (PDF page 20).

    Self-Agency (Friday, 10:10)

    A slide with quite a bit of text.  Highlighted: “Dropped freq_min/max“

    I have to admit it's kind of silly to pick out this particular point from all the material discussed by the IG and WG chairs in the Charge to the Working Groups, but a part of why this job is so gratifying is experiences of self-agency. I just had one of these during the Radio IG report: They have dropped the duplication of spectral information in their proposed extension to obscore.

    Yay! I have lobbied for that one for a long time on grounds that if there are both em_min/em_max and f_min/f_max in an obscore record (which express the same thing, with em_X being wavelengths in metres, and f_X frequencies in… something else, where proposals included Hz, MHz and GHz), it is virtually certain that at least one pair is wrong. Most likely, both of them will be. I have actually created a UDF for ADQL queries to make that point. And now: Success!

    Focus Session: High Energy and Time Domain (Friday, 12:00)

    The first “working” session of the Interop is a plenary on High Energy and Time Domain, that is, instruments that look for messenger particles that may have the energy of a tennis ball, as well as ways to let everyone else know about them quickly.

    Incidentally, that “quickly” is a reason why the two apparently unconnected topics share a session: Particles in the tennis ball range are fortunately rare (or our DNA would be in trouble), and so when you have found one, you might want to make sure everyone else gets to look whether something odd shows up in other messengers (as in: optical photons, say) where that particle came from. This is also relevant because many detectors in that energy (and particle) range do not have a particularly good idea of where the signal came from, and follow-ups in other wavelengths may help in figuring out what sort of thing may have produced a signal.

    I enjoyed a slide by Jutta, who reported on VO publication of km3net data, that is, neutrinos detected in a large detector cube below the Mediterranean Sea, using the Earth as a filter:

    Screenshot of a slide: “What we do: Point source analysis, Alerts and follow-ups; What we don't do: Mission planning, Nice pictures.”

    “We don't do pretty pictures“ is of course a very cool thing one can say, although I bet this is not 120% honest. But I am willing to give Jutta quite a bit of slack; after all, km3net data is served through DaCHS, and I am still hopeful that we will use it to prototype serving more complex data products than just plain event lists in the future.

    A bit later in the session, an excellent question was raised by Judy Racusin in her talk on GCN:

    A talk slide, with highlighted text: “Big Question: Why hasn't this [VOEvent] continued to serve the needs of various transient astrophysics communities?”

    The background of the question is that there is a rather reasonable standard for the dissemination of alerts and similar data, VOEvent. This has seen quite a bit of takeup in the 2000s, but, as evinced by page 17 of Judy's slides, all the current large time-domain projects decided to invent something new, and it seems each one invented something different.

    I don't have a definitive answer to why and how that happened (as opposed to, for instance, everyone cooperating on evolving VOEvent to match whatever requirements these projects have), although outside pressures (e.g., the rise of Apache Avro and Kafka) certainly played a role.

    I will, however, say that I strongly suspect that if the VOEvent community back then had had more public and registered streams consumed by standard software, it would have been a lot harder for these new projects to (essentially) ignore it. I'd suggest as a lesson to learn from that: make sure your infrastructure is public and widely consumed as early as you can. That ought to help a lot in ensuring that your standard(s) will live long and prosper.

    In Apps I (Friday 16:30)

    I am now in the Apps session. This is the most show-and-telly event you will get at an Interop, with the largest likelihood of encountering the pretty pictures that Jutta had flamboyantly expressed disinterest in this morning. In the first talk already, Thomas delivered, for instance, mystic pictures from Mars:

    A photo of Olympus Mons on Mars with overplotted contour lines.

    Most of the magic was shown in a live demo; once the recordings are online, consider checking this one out (I'll mention in passing that HiPS2MOC looks like a very useful feature, too).

    My talk, in contrast, had extremely boring slides; you're not missing out at all by simply reading the notes. The message is not overly nice, either: Rather do fewer features than optional ones, as a server operator please take up new standards as quickly as you can, and in the same role please provide good metadata. This last point happened to be a central message in Henrik's talk on ESASky (which aptly followed mine) as well, that, like Thomas', featured a live performance of eye candy.

    Mario Juric's talk on something called HATS then featured this nice plot:

    A presentation slide headed “partition hierarchically“, with all-sky heatmap featuring pixels of varying size.

    That's Gaia's source catalogue pixelated such that the sources in each pixel require about a constant processing time. The underlying idea, hierarchical tiling, is great and has proved itself extremely capable not only with HiPS, which is what is behind basically anything in the VO that lets you smoothly zoom, in particular Aladin's maps. HATS' basic premise seems to be to put tables (rather than JPEGs or FITS images as usual) into a HiPS structure. That has been done before, as with the catalogue HiPSes; Aladin users will remember the Gaia or Simbad layers. HATS, now, stores Parquet files, provides Pandas-like interfaces on top of them, and in particular has the nice property of handing out data chunks of roughly equal size.

    That is certainly great, in particular for the humongous data sets that Rubin (née LSST) will produce. But I wondered how well it will stand up when you want to combine different data collections of this sort. The good news: they have already tried it, and they have even thought about how to pack HATS' API behind a TAP/ADQL interface. Excellent!

    Further great news in Brigitta's talk [warning: link to google]: It seems you can now store ipython (“Jupyter”) notebooks in, ah well, Markdown – at least in something that seems version-controllable. Note to self: look at that.

    Data Access Layer (Saturday 9:30)

    I am now sitting in the first session of the Data Access Layer Working Group. This is where we talk about the evolution of the protocols you will use if you “use the VO”: TAP, SIAP, and their ilk.

    Right at the start, Anastasia Laity spoke about a topic that has given me quite a bit of headache several times already: How do you tell simulated data from actual observations when you have just discovered a resource that looks relevant to your research?

    There is prior art for that in SSAP, which has a data source metadata item on entire services, with values survey, pointed, custom, theory, or artificial (see also SimpleDALRegExt sect. 3.3, where the operational part of this is specified). But that's SSAP only. Should we have a place for that in registry records in general? Or even at the dataset level? This seems rather related to the recent addition of productTypeServed in the brand-new VODataService 1.3. Perhaps it's time for a dataSource element in VODataService?

    A large part of the session was taken up by the question of persistent TAP uploads that I have covered here recently. I have summarised this in the session, and after that, people from ESAC (who have built their machinery on top of VOSpace) and CADC (who have inspired my implementation) gave their takes on the topic of persistent uploads. I'm trying hard to like ESAC's solution, because it is using the obvious VO standard for users to manage server-side resources (even though the screenshot in the slides,

    A cutout of a presentation slide showing a browser screenshot with a modal diaglog with a progress bar for an upload.

    suggests it's just a web page). But then it is an order of magnitude more complex in implementation than my proposal, and the main advantage would be that people can share their tables with other users. Is that a use case important enough to justify that significant effort?

    Then Pat's talk on CADC's perspective presented a hierarchy of use cases, which perhaps offers a way to reconcile most of the opinions: Is there a point in having the same API on /tables and /user_tables, depending on whether we want the tables to be publicly visible?

    Data Curation and Preservation (Saturday, 11:15)

    This Interest Group's name sounds like something only a librarian could become agitated about: Data curation and preservation. Yawn.

    Fortunately, I consider myself a librarian at heart, and hence I am participating in the DCP session now. In terms of engagement, we have already started to quarrel about a topic that must seem rather like bikeshedding from the outside: should we bake the DOI resolver into the way we write DOIs (like http://doi.org/10.21938/puTViqDkMGcQZu8LSDZ5Sg; actually, for a few years now: https instead of http?), or should we continue to use the doi URI scheme, as we do now: doi:10.21938/puTViqDkMGcQZu8LSDZ5Sg?

    This discussion came up because the doi foundation asks you to render DOIs in an actionable way, which some people understand as them asking people to write DOIs with their resolver baked in. Now, I am somewhat reluctant to do that mainly on grounds of user freedom. Sure, as long as you consider the whole identifier an opaque string, their resolver is not actually implied, but that's largely fictitious, as evinced by the fact that somehow identifiers with http and with https would generally be considered equivalent. I do claim that we should make it clear that alternative resolvers are totally an option. Including ours: RegTAP lets you resolve DOIs to ivoids and VOResource metadata, which to me sounds like something you might absolutely want to do.
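
    In case you would like to try that, here is a sketch of how I would do it in pyVO (assuming RegTAP 1.1's rr.alt_identifier table is where the DOIs sit, and using our RegTAP endpoint; the DOI is the one from above):

    import pyvo

    regtap = pyvo.dal.TAPService("http://reg.g-vo.org/tap")
    # resolve a DOI to its ivoid and resource title
    res = regtap.run_sync("""
        SELECT ivoid, res_title
        FROM rr.resource
        NATURAL JOIN rr.alt_identifier
        WHERE alt_identifier='doi:10.21938/puTViqDkMGcQZu8LSDZ5Sg'
        """)
    print(res.to_table())
    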

    Another (similarly biased) point: Not everything on the internet is http. There are other identifier types that are resolvable (including ivoids). Fortunately, writing DOIs as HTTP URIs is not actually what the doi foundation is asking you to do. Thanks to Gus for clarifying that.

    These kinds of questions also turned up in the discussion after my talk on BibVO. Among other things, that draft standard proposes to deliver information on what datasets a paper used or produced in a very simple JSON format. That parsimony has been put into question, and in the end the question is: do we want to make our protocols a bit more complicated to enable interoperability with other “things”, probably from outside of astronomy? Me, I'm not sure in this case: I consider all of BibVO some sort of contract essentially between the IVOA and SciX (née ADS), and I doubt that someone else than SciX will even want to read this or has use for it.

    But then I (and others) have been wrong with predictions like this before.

    Registry (Saturday 14:30)

    Now it's registry time, which for me is always a special time; I have worked a lot on the Registry, and I still do.

    In Christophe's statistics talk, I was totally blown away by the number of authorities and registries from Germany, given how small GAVO is. Oh wow. In this graph of authorities in the VO, we are the dark green slice far at the bottom of the pie:

    A presentation slide with two pie charts.  In the larger one, there are many small and a couple of large slices.  A dark green one makes up a bit less than 10%.

    I will give you that, as usual with metrics, to understand what they mean you have to know so much that you then don't need the metrics any more. But again there is an odd feeling of self-agency in that slide.

    The next talk, Robert Nikutta's announcement of generic publishing registry code, was – as already mentioned above – particularly good news for me, because it let me add something particularly straightforward into my overview of OAI-PMH servers for VO use, and many data providers (those unwise enough to not use DaCHS…) have asked for that.

    For the rest of the session I entertained folks with the upcoming RFC of VOResource 1.2 and the somewhat sad state of affairs in full-text searches in the VO. Hence, I was too busy to report on how gracefully the speaker made his points. Ahem.

    Semantics and Solar System (Saturday 16:30)

    Ha! A session in which I don't talk. That's even more remarkable because I'm the chair emeritus of the Semantics WG and the vice-chair of the Solar System IG at the moment.

    Nevertheless, my plan has been to sit back and relax. Except that some of Baptiste's proposals for the evolution of the IVOA vocabularies are rather controversial. I was therefore too busy to add to this post again.

    But at least there is hope to get rid of the ugly “(Obscure)” as the human-readable label of the geo_app reference frame that entered that vocabulary via VOTable; you see, this term was allowed in COOSYS/@system since VOTable 1.0, but when we wrote the vocabulary, nobody who reviewed it could remember what it meant. In this session, JJ finally remembered. Ha! This will be a VEP soon.

    It was also oddly gratifying to read this slide from Stéphane's talk on fetching data from PDS4:

    A presentation slide with bullet points complaining about missing metadata, inconsistent metadata, and other unpleasantries.

    Lists like these are rather characteristic of a data publisher's diary. Of course, I know that's true. But seeing it in public still gives me a warm feeling of comradeship. Stéphane then went on to tell us how to make the cool 67P images in TOPCAT (I had already mentioned those above when I talked about the Exec report):

    A 3D-plot of an odd shape with colours indicating some physical quantity.

    Operations (Sunday 10.00)

    I am now in the session of the Operations IG, where Henrik is giving the usual VO Weather Report. VO weather reports discuss how many of our services are “valid” in the sense of “will work reasonably well with our clients“. As usual for these kinds of metrics, you need to know quite a bit to understand what's going on and how bad it is when a service is “not compliant”. In particular for the TAP stats, things look a lot bleaker than they actually are:

    A bar graph showing the temporal evolution of the number of TAP servers failing (red), just passing (yellow) or passing (green) validation over the past year or so.  Yellow is king.

    Green is “fully compliant”, yellow is “mostly compliant”, red is “not compliant”. For whatever that means.

    These assessments are based on stilts taplint, which is really fussy (and rightly so). In reality, you can usually use even the red services without noticing something is wrong. Except… if you are not doing things quite right yourself.

    That was the topic of my talk for Ops. It is another outcome of this summer semester's VO course, where students were regularly confused by diagnostics they got back. Of course, while on the learning curve, you will see more such messages than if you are a researcher who is just gently adapting some sample code. But anyway: Producing good error messages is both hard and important. Let me quote my faux quotes in the talk:

    Writing good error messages is great art: Do not claim more than you know, but state enough so users can guess how to fix it.

    —Demleitner's first observation on error messages

    Making a computer do the right thing for a good request usually is not easy. It is much harder to make it respond to a bad request with a good error message.

    —Demleitner's first corollary on error messages

    Later in the session there was much discussion about “denial of service attacks” that services occasionally face. For us, these do not generally come from malicious people, but from basically well-meaning people who struggle to do the right thing (read documentation, figure out efficient ways to do what they want to do).

    For instance, while far below DoS level, turnitin.com was for a while harvesting all VO registry records from some custom, HTML-rendering endpoint every few days, firing off, in a rather short time, 30'000 requests that were relatively expensive on my side (admittedly because I have implemented that particular endpoint in the laziest fashion imaginable). They could have done the same thing using OAI-PMH with a single request that, on top, would have taken up almost no CPU on my side. For the record, it seems someone at turnitin.com has seen the light; at least, as far as I can tell (without actually checking the logs), they don't do that mass harvesting any more. Still, with a single computer, it is not hard to bring down your average VO server, even if you don't plan to.

    Operators that are going into “the cloud” (which is a thinly disguised euphemism for “voluntarily becoming hostages of amazon.com”) or that are severely “encouraged” to do that by their funding agencies have the additional problem that, for them, indiscriminate downloads might quickly become extremely costly on top. Hence, we were talking a bit about mitigations, from HTTP 429 status codes (“too many requests”) to going for various forms of authentication, in particular handing out API keys. Oh, sigh. It would really suck if people ended up needing to get and manage keys for all the major services. Perhaps we should have VO-wide API keys? I already have a plan for how we could pull that off…

    Winding down (Monday 7:30)

    The Interop concluded yesterday noon with reports from the working groups and another (short) one from the Exec chair. Phewy. It's been a straining week ever since ADASS' welcome reception almost exactly a week earlier.

    Reviewing what I have written here, I notice I have not even mentioned a topic that pervaded several sessions and many of the chats on the corridors: The P3T, which expands to “Protocol Transition Tiger Team”.

    This was an informal working group that was formed because some adopters of our standards felt that they (um: the standards) are showing their age, in particular because of the wide use of XML and because they do not always play well with “modern” (i.e., web browser-based) “security” techniques, which of course mostly gyrate around preventing cross-site information disclosure.

    I have to admit that I cannot get too hung up on both points; I think browser-based clients should be the exception rather than the norm in particular if you have secrets to keep, and many of the “modern” mitigations are little more than ugly hacks (“pre-flight check“) resulting from the abuse of a system designed to distribute information (the WWW) as an execution platform. But then this ship has sailed for now, and so I recognise that we may need to think a bit about some forms of XSS mitigations. I would still say we ought to find ways that don't blow up all the sane parts of the VO for that slightly insane one.

    On the format question, let me remark that XML is not only well-thought out (which is not surprising given its designers had the long history of SGML to learn from) but also here to stay; developers will have to handle XML regardless of what our protocols do. More to the point, it often seems to me that people who say “JSON is so much simpler” often mean “But it's so much simpler if my web page only talks to my backend”.

    Which is true, but that's because then you don't need to be interoperable and hence don't have to bother with metadata for other peoples' purposes. But that interoperability is what the IVOA is about. If you were to write the S-expressions that XML encodes at its base in JSON, it would be just as complex, just a bit more complicated because you would be lacking some of XML's goodies from CDATA sections to comments.

    Be that as it may, the P3T turned out to do something useful: It tried to write OpenAPI specifications for some of our protocols, and already because that smoked out some points I would consider misfeatures (case-insensitive parameter names for starters), that was certainly a worthwhile effort. That, as some people pointed out, you can generate code from OpenAPI is, I think, not terribly valuable: What code that generates probably shouldn't be written in the first place and rather be replaced by some declarative input (such as, cough, OpenAPI) to a program.

    But I will say that I expect OpenAPI specs to be a great help to validators, and possibly also to implementors because they give some implementation requirements in a fairly concise and standard form.

    In that sense: P3T was not a bad thing. Let's see what comes out of it now that, as Janet also reported in the closing session, the tiger is sleeping:

    A presentation slide with a sleeping tiger and the proclamation that ”We feel the P3T has done its job”.
    [1]“feels” as opposed to “has”, that is. I do still think that many people would be happy if they could say something like: “I'm interested in species A, B, and C at temperature T (and perhaps pressure p). Now let me zoom into a spectrum and show me lines from these species; make it so the lines don't crowd too much and select those that are plausibly the strongest with this physics.”
  • A Proposal for Persistent TAP Uploads

    From its beginning, the IVOA's Table Access Protocol TAP has let users upload their own tables into the services' databases, which is an important element of TAP's power (cf. our upload crossmatch use case for a minimal example). But these uploads only exist for the duration of the request. Having more persistent user-uploaded tables, however, has quite a few interesting applications.

    Inspired by Pat Dowler's 2018 Interop talk on youcat I have therefore written a simple implementation for persistent tables in GAVO's server package DaCHS. This post discusses what is implemented, what is clearly still missing, and how you can play with it.

    If all you care about is using this from Python, you can jump directly to a Jupyter notebook showing off the features; it by and large explains the same things as this blogpost, but using Python instead of curl and TOPCAT. Since pyVO does not know about the proposed extensions, the code necessarily is still a bit clunky in places, but if something like this will become more standard, working with persistent uploads will look a lot less like black art.

    Before I dive in: This is certainly not what will eventually become a standard in every detail. Do not do large implementations against what is discussed here unless you are prepared to throw away significant parts of what you write.

    Creating and Deleting Uploads

    Where Pat's 2018 proposal re-used the VOSI tables endpoint that every TAP service has, I have provisionally created a sibling resource user_tables – and I found that usual VOSI tables and the persistent uploads share virtually no server-side code, so for now this seems a smart thing to do. Let's see what client implementors think about it.

    What this means is that for a service with a base URL of http://dc.g-vo.org/tap[1], you would talk to (children of) http://dc.g-vo.org/tap/user_tables to operate the persistent tables.

    As with Pat's proposal, to create a persistent table, you do an http PUT to a suitably named child of user_tables:

    $ curl -o tmp.vot https://docs.g-vo.org/upload_for_regressiontest.vot
    $ curl -H "content-type: application/x-votable+xml" -T tmp.vot \
      http://dc.g-vo.org/tap/user_tables/my_upload
    Query this table as tap_user.my_upload
    

    The actual upload at this point returns a reasonably informative plain-text string, which feels a bit ad-hoc. Better ideas are welcome, in particular after careful research of the rules for 30x responses to PUT requests.

    Trying to create tables with names that will not work as ADQL regular table identifiers will fail with a DALI-style error. Try something like:

    $ curl -H "content-type: application/x-votable+xml" -T tmp.vot \
      http://dc.g-vo.org/tap/user_tables/join
    ... <INFO name="QUERY_STATUS" value="ERROR">'join' cannot be used as an
      upload table name (which must be regular ADQL identifiers, in
      particular not ADQL reserved words).</INFO> ...
    

    After a successful upload, you can query the VOTable's content as tap_user.my_upload:

    A TOPCAT screenshot with a query 'select avg("3.6mag") as blue, avg("5.8mag") as red from tap_user.my_upload' that has a few red warnings, and a result window showing values for blue and red.

    TOPCAT (which is what painted these pixels) does not find the table metadata for tap_user tables (yet), as I do not include them in the “public“ VOSI tables. This is why you see the reddish syntax complaints here.

    I happen to believe there are many good reasons for why the volatile and quickly-changing user table metadata should not be mixed up with the public VOSI tables, which can be several 10s of megabytes (in the case of VizieR). You do not want to have to re-read that (or discard caches) just because of a table upload.

    If you have the table URL of a persistent upload, however, you inspect its metadata by GET-ting the table URL:

    $ curl http://dc.g-vo.org/tap/user_tables/my_upload | xmlstarlet fo
    <vtm:table [...]>
      <name>tap_user.my_upload</name>
      <column>
        <name>"_r"</name>
        <description>Distance from center (RAJ2000=274.465528, DEJ2000=-15.903352)</description>
        <unit>arcmin</unit>
        <ucd>pos.angDistance</ucd>
        <dataType xsi:type="vs:VOTableType">float</dataType>
        <flag>nullable</flag>
      </column>
      ...
    

    – this is a response as from VOSI tables for a single table. Once you are authenticated (see below), you can also retrieve a full list of tables from user_tables itself as a VOSI tableset. Enabling that for anonymous uploads did not seem wise to me.

    When done, you can remove the persistent table, which again follows Pat's proposal:

    $ curl -X DELETE http://dc.g-vo.org/tap/user_tables/my_upload
    Dropped user table my_upload
    

    And again, the text/plain response seems somewhat ad hoc, but in this case it is somewhat harder to imagine something less awkward than in the upload case.

    If you do not delete yourself, the server will garbage-collect the upload at some point. On my server, that's after seven days. DaCHS operators can configure that grace period on their services with the [ivoa]userTableDays setting.

    Authenticated Use

    Of course, as long as you do not authenticate, anyone can drop or overwrite your uploads. That may be acceptable in some situations, in particular given that anonymous users cannot browse their uploaded tables. But obviously, all this is intended to be used by authenticated users. DaCHS at this point can only do HTTP basic authentication with locally created accounts. If you want one in Heidelberg, let me know (and otherwise push for some sort of federated VO-wide authentication, but please do not push me).

    To just play around, you can use uptest as both username and password on my service. For instance:

      $ curl -H "content-type: application/x-votable+xml" -T tmp.vot \
      --user uptest:uptest \
      http://dc.g-vo.org/tap/user_tables/privtab
    Query this table as tap_user.privtab
    
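
    If you would rather script this than shell out to curl, here is a minimal sketch using the requests package (it just mirrors the curl call above; pyVO itself does not know about these endpoints yet):

    import requests

    # PUT a local VOTable as a persistent table, authenticating with the
    # demo account from above; error handling omitted
    with open("tmp.vot", "rb") as f:
        resp = requests.put(
            "http://dc.g-vo.org/tap/user_tables/privtab",
            data=f,
            headers={"content-type": "application/x-votable+xml"},
            auth=("uptest", "uptest"))
    print(resp.status_code, resp.text)
    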

    In recent TOPCATs, you would enter the credentials once you hit the Log In/Out button in the TAP client window. Then you can query your own private copy of the uploaded table:

    A TOPCAT screenshot with a query 'select avg("3.6mag") as blue, avg("5.8mag") as red from tap_user.my_upload' that has a few red warnings, and a result window showing values for blue and red; there is now a prominent Log In/Out-button showing we are logged in.

    There is a second way to create persistent tables (that would also work for anonymous): run a query and prepend it with CREATE TABLE. For instance:

    A TOPCAT screenshot with a query 'create table tap_user.smallgaia AS SELECT * FROM gaia.dr3lite TABLESAMPLE(0.001)'. Again, TOPCAT flags the create as an error, and there is a dialog "Table contained no rows".

    The “error message” about the empty table here is to be expected; since this is a TAP query, it stands to reason that some sort of table should come back for a successful request. Sending the entire newly created table back without solicitation seems a waste of resources, and so for now I am returning a “stub” VOTable without rows.

    As an authenticated user, you can also retrieve a full tableset listing the user tables you have:

    $ curl --user uptest:uptest http://dc.g-vo.org/tap/user_tables | xmlstarlet fo
    <vtm:tableset ...>
      <schema>
        <name>tap_user</name>
        <description>A schema containing users' uploads. ...  </description>
        <table>
          <name>tap_user.privtab</name>
          <column>
            <name>"_r"</name>
            <description>Distance from center (RAJ2000=274.465528, DEJ2000=-15.903352)</description>
            <unit>arcmin</unit>
            <ucd>pos.angDistance</ucd>
            <dataType xsi:type="vs:VOTableType">float</dataType>
            <flag>nullable</flag>
          </column>
          ...
        </table>
        <table>
          <name>tap_user.my_upload</name>
          <column>
            <name>"_r"</name>
            <description>Distance from center (RAJ2000=274.465528, DEJ2000=-15.903352)</description>
            <unit>arcmin</unit>
            <ucd>pos.angDistance</ucd>
            <dataType xsi:type="vs:VOTableType">float</dataType>
            <flag>nullable</flag>
          </column>
          ...
        </table>
      </schema>
    </vtm:tableset>
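
    To turn that tableset into a plain list of table names from Python, something like this will do; as before, the parsing ignores namespaces and is only matched to the example response:

    import requests
    import xml.etree.ElementTree as ET

    resp = requests.get(
        "http://dc.g-vo.org/tap/user_tables",
        auth=("uptest", "uptest"))
    resp.raise_for_status()

    root = ET.fromstring(resp.content)
    # Print the <name> child of each <table> element, ignoring namespaces.
    for elem in root.iter():
        if elem.tag.split("}")[-1] == "table":
            for child in elem:
                if child.tag.split("}")[-1] == "name":
                    print(child.text)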
    

    Open Questions

    Apart from the question of whether any of this will gain community traction, there are a few obvious open points:

    1. Indexing. For tables of non-trivial sizes, one would like to give users an interface to say something like “create an index over ra and dec interpreted as spherical coordinates and cluster the table according to it”. Because this kind of thing can change runtimes by many orders of magnitude, enabling it is not just some optional embellishment.

      On the other hand, what I just wrote already suggests that even expressing the users' requests in a sufficiently flexible cross-platform way is going to be hard. Also, indexing can be a fairly slow operation, which means it will probably need some sort of UWS interface.

    2. Other people's tables. It is conceivable that people might want to share their persistent tables with other users. If we want to enable that, one would need some interface on which to define who should be able to read (write?) what table, some other interface on which users can find what tables have been shared with them, and finally some way to let query writers reference these tables (tap_user.<username>.<tablename> seems tricky since with federated auth, user names may be just about anything).

      Given all this, for now I doubt that this use case is important enough to let all these tough nuts delay a first version of user uploads.

    3. Deferring destruction. Right now, you can delete your table early, but you cannot tell my server that you would like to keep it for longer. I suppose POST-ing to a destruction child of the table resource in UWS style would be straightforward enough. But I'd rather wait and see whether the other lacunae require a completely different pattern before I touch this; for now, I don't believe many persistent tables will remain in use beyond a few hours after their creation.

    4. Scaling. Right now, I am not streaming the upload, and several other implementation details limit the size of realistic user tables. Making things more robust (and perhaps scalable) hence will certainly be an issue. Until then I hope that the sort of table that worked for in-request uploads will be fine for persistent uploads, too.

    Implemented in DaCHS

    If you run a DaCHS-based data centre, you can let your users play with the stuff I have shown here already. Just upgrade to the 2.10.2 beta (you will need to enable the beta repo for that to happen) and then type the magic words:

    dachs imp //tap_user
    

    It is my intention that users cannot create tables in your DaCHS database server unless you say these words. And once you say dachs drop --system //tap_user, you are safe from their huge tables again. I would consider any other behaviour a bug – of which there are probably still quite a few. Which is why I am particularly grateful to all DaCHS operators that try persistent uploads now.

    [1]As already said in the notebook, if http bothers you, you can write https, too; but then it's much harder to watch what's going on using ngrep or friends.
  • GAVO at the AG-Tagung in Köln

    People standing and sitting around a booth-like table.  There's a big GAVO logo and a big screen on the left-hand side, a guy in a red hoodie is clearly giving a demo.

    As every year, GAVO participates in the fall meeting of the Astronomische Gesellschaft (AG), the association of astronomers working in Germany. This year, the meeting is hosted by the Universität zu Köln (a.k.a. University of Cologne), and I want to start with thanking them and the AG staff for placing our traditional booth smack next to a coffee break table. I anticipate with glee our opportunities to run our pitches on how much everyone is missing out if they're not doing VO while people are queueing up for coffee. Excellent.

    As every year, we are co-conveners of a splinter meeting on e-science and the Virtual Observatory, where I will be giving a talk on global dataset discovery (you heard it here first; lecture notes for the talk) late on Thursday afternoon.

    And as every year, there is a puzzler, a little problem rather easily solvable using VO tools; I was delighted to see people apparently already waiting for it when I handed out the problem sheet during the welcome reception tonight. You are very welcome to try your hand at it, but you only get to enter our raffle if you are on site. This year, the prize is a towel (of course) featuring a great image from ESA's Mars Express mission, where Phobos floats in front of Mars' limb:

    A 2:1 landscape black-and-white image with a blackish irregular spheroid floating in front of a deep horizon.

    I will update this post with the hints we are going to give out during the coffee breaks tomorrow and on Wednesday. And I will post our solution here late on Thursday.

    At our booth, you will also find various propaganda material, mostly covering matters I have mentioned here before; for posteriority and remoteriority, let me link to PDFs of the flyers/posters I have made for this meeting (with re-usability in mind). To advertise the new VO lectures, I am asking Have you ever wished there was a proper introduction to using the Virtual Observatory? with lots of cool DOIs and perhaps less-cool QR codes. Another flyer trying to gain street cred with QR codes is the Follow us flyer advertising our Fediverse presence. We also still show a pitch for publishing with us and hand out the inevitable who we are flyer (which, I'll readily admit, has never been an easy sell).

    A fediverse screenshot and URIs for following us.

    Bonferroni for Open Data?

    I got a lot more feedback on a real classic that I have shown at many AG meetings since the 2013 Tübingen meeting than on the QR code-heavy posters: Lame excuses for not publishing data.

    A tricky piece of feedback on that was an excuse that may actually be a (marginally) valid criticism of open data in general. You see, in particular in astroparticle physics (where folks are usually particularly uptight with their data), people run elaborate statistics on their results, inspired by the sort of statistics they do in high energy physics (“this is a 5-sigma detection of the Higgs particle”). When you do this kind of thing, you do run into a problem when people run new “tests” against your data because of the way test theory works. If you are actually talking about significance levels, you would have to apply Bonferroni corrections (or worse) when you do new tests on old data.

    This is actually at least not untrue. If you do not account for the slight abuse of data and tests of this sort, the usual interpretation of the significance level – more or less the probability that you will reject a true null hypothesis and thus claim a spurious result – breaks down, and you can no longer claim things like “aw, at my significance level of 0.05, I'll make spurious claims only one out of twenty times tops”.

    Is this something people opening their data would need to worry about when they do their original analysis? It seems obvious to me that that's not the case and it would actually be impossible to do, in particular given that there is no way to predict what people will do in the future. But then there are many non-obvious results in statistics going against at least my gut feelings.

    Mind you, this definitely does not apply to most astronomical research and data re-use I have seen. But the point did make me wonder whether we may actually need some more elaborate test theory for re-used open data. If you know about anything like that: please do let me know.

    Followup (2024-09-10)

    The first hint is out. It's “Try TOPCAT's TAP client to solve this puzzler; you may want to look for 2MASS XSC there.” Oh, and we noticed that the problem was stated rather awkwardly in the original puzzler, which is why we have issued an erratum. The online version is fixed; it now says “where we define obscure as covered by a circle of four J-magnitude half-light radii around an extended object”.

    Followup (2024-09-10)

    After our first splinter – with lively discussions on the concept and viability of the “science-ready data” we have always had in mind as the primary sort of thing you would discover in the VO –, I have revealed the second hint: “TOPCAT's Examples button is always a good idea, in particular if you are not too proficient in ADQL. What you would need here is known as a Cone Selection.”

    Oh, in case you are curious where the discussion on the science-ready data gyrated to: the plan to supply data that is usable without having to have reduction pipelines in place is a good one. However, there undoubtedly are cases in which transparent provenance and the ability to do one's own re-reductions enable important science. With datalink [I am linking to a 2015 poster on that written by me; don't read that spec just for fun], we have an important ingredient for that. But I grant you that in particular the preservation of the software that makes up reduction pipelines is a hard problem. It may even be an impossible problem if “preservation” is supposed to encompass malleability and fixability.

    Followup (2024-09-11)

    I've given the last two hints today: “To find the column with the J half-light radius, it pays to sort the columns in the Columns tab in TOPCAT by name or, for experts using VizieR's version of the XSC, by UCD.” and “ADQL has aggregate functions, which let you avoid downloading a lot of data when all you need are summary properties. This may not matter with what little data you would transfer here, but still: use server-side SUM.”

    Followup (2024-09-12)

    I have published the (to me, physically surprising) puzzler solution to https://www.g-vo.org/puzzlerweb/puzzler2024-solution.pdf. In case it matters to you: The towel went to Marburg again. Congratulations to the winner!

    Followup (2024-09-13)

    On the way home I notice this might be a suitable place to say how I did the QR codes I was joking about above. Basis: The embedding documents are written in LaTeX, and I'm using make to build them. To include a QR code, I am writing something like:

    \includegraphics[height=5cm]{vo-qr.png}
    

    in the LaTeX source, and I am declaring a dependency on that file in the makefile:

    fluggi.pdf: fluggi.tex vo-qr.png <and possibly more images>
    

    Of course, this will error out because there is no file vo-qr.png at that point. The plan is to programmatically generate it from a file containing the URL (or whatever else you want to put into the QR code). That file is named vo.url in this case – that is, whatever comes before -qr.png in the image name, plus .url. Here, it contains:

    https://doi.org/10.21938/avVAxDlGOiu0Byv7NOZCsQ
    

    The automatic image generation then is effected by a pattern rule in the makefile:

    %-qr.png: %.url
            python qrmake.py $<
    

    And then all it takes is a short script qrmake.py, which is based on python3-qrcode:

    import sys

    import qrcode

    # Read the payload (a URL or any other text) from the file named on the
    # command line, e.g. vo.url.
    with open(sys.argv[1]) as f:
        content = f.read().strip()

    # Build the QR code without a quiet zone; the LaTeX layout provides its
    # own margins.
    output_code = qrcode.QRCode(border=0)
    output_code.add_data(content)

    # vo.url becomes vo-qr.png, matching the pattern rule above.
    dest_name = sys.argv[1].replace(".url", "") + "-qr.png"
    output_code.make_image().save(dest_name)
    
