Articles from Operations

  • Parallel Queries

    Image: Plot of run times

    An experiment with parallel querying of PPMX, going from single-threaded execution to using seven workers.

    Let me start this post with a TL;DR for

    scientists:
      Large analysis queries (like those that contain a GROUP BY clause) profit a lot from parallel execution, and you needn't do a thing for that.

    DaCHS operators:
      When you have large tables, Postgres 11 together with the next DaCHS release may speed up your responses quite dramatically in some cases.

    So, here's the story –

    I've finally overcome my stretch trauma and upgraded the Heidelberg data center's database server to Debian buster. With that, I got Postgres 11, and I finally bothered to look into what it takes to enable parallel execution of database queries.

    Turns out: My Postgres started to do parallel execution right away, but just in case, I went for the following lines in postgresql.conf:

    max_parallel_workers_per_gather = 4
    max_worker_processes = 10
    max_parallel_workers = 10
    

    Don't quote me on this – I frankly admit I haven't really developed a feeling for the consequences of max_parallel_workers_per_gather and instead just did some experiments while the box was loaded otherwise, determining where raising that number has a diminishing return (see below for more on this).

    The max_worker_processes thing, on the other hand, is an educated guess: on my data center, there's essentially never more than one person at a time who's running “interesting”, long-running queries (i.e., async), and that person should get the majority of the execution units (the box has 8 physical CPUs that look like 16 cores due to hyperthreading) because all other operations are just peanuts in comparison. I'll gladly accept advice to the effect that that guess isn't that educated after all.

    Of course, that wasn't nearly enough. You see, since TAP queries can return rather large result sets – on the GAVO data center, the match limit is 16 million rows, which for a moderate row size of 2 kB already translates to 32 GB of memory use if pulled in at once, half the physical memory of that box –, DaCHS uses cursors (if you're a psycopg2 person: named cursors) to stream results and write them out to disk as they come in.

    Sadly, postgres won't do parallel plans if it thinks people will discard a large part of the result anyway, and it thinks that if you're coming through a cursor. So, in SVN revision 7370 of DaCHS (and I'm not sure if I'll release that in this form), I'm introducing a horrible hack that, right now, just checks if there's a literal “group” in the query and doesn't use a cursor if so. The logic is, roughly: With GROUP, the result set probably isn't all that large, so streaming isn't that important. At the same time, this type of query is probably going to profit from parallel execution much more than your boring sequential scan.
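    In case you'd like a concrete picture of what that hack amounts to, here is a minimal sketch in psycopg2 terms; names and details are made up for illustration, and DaCHS's actual code does quite a bit more:

    import psycopg2

    def execute_maybe_streaming(conn, query):
      # The hack from above: a literal "group" in the query suggests an
      # aggregate with a small result set, so use a plain client-side
      # cursor and let postgres parallelise.  Everything else goes
      # through a named (server-side) cursor, which streams results
      # instead of pulling them into memory at once.
      if "group" in query.lower():
        cursor = conn.cursor()
      else:
        cursor = conn.cursor(name="streaming_results")
        cursor.itersize = 10000   # rows fetched per network round trip
      cursor.execute(query)
      return cursor

    Iterating over the returned cursor then works the same way in both cases.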

    This gives rather impressive speed gains. Consider this example (of course, it's selected to be extreme):

    import contextlib
    import pyvo
    import time
    
    @contextlib.contextmanager
    def timeit(activity):
      start_time = time.time()
      yield
      end_time = time.time()
      print("Time spent on {}: {} s".format(activity, end_time-start_time))
    
    
    svc = pyvo.tap.TAPService("http://dc.g-vo.org/tap")
    with timeit("Cold (?) run"):
      svc.run_sync("select round(Rmag) as bin, count(*) as n"
        " from ppmx.data group by bin")
    with timeit("Warm run"):
      svc.run_sync("select round(Rmag) as bin, count(*) as n"
        " from ppmx.data group by bin")
    

    (if you run it yourself and you get warnings about VOTable versions from astropy, ignore them; I'm right and astropy is wrong).

    Before enabling parallel execution, the warm run took 14.5 seconds; after, it was 2.5 seconds. That's almost a 6-fold speedup. Nice!

    Indeed, that holds beyond toy examples. The showcase Gaia density plot:

    SELECT
            count(*) AS obs,
            source_id/140737488355328 AS hpx
    FROM gaia.dr2light
    GROUP BY hpx
    

    (the long odd number is 2^35·4^(12-6), which turns source_ids into level-6 HEALPixes as per the footnote on source_id in the Gaia documentation; please note that Postgres right now isn't smart enough to parallelise ivo_healpix), which traditionally ran for about an hour, is now done in less than 10 minutes.
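    In case you want to re-derive that divisor (or produce the one for a different HEALPix level), the arithmetic is simple enough; here's a quick sketch in Python:

    # Gaia source_ids contain the level-12 HEALPix index of the source,
    # shifted left by 35 bits.  Hence, integer-dividing by
    # 2**35 * 4**(12 - level) yields the index at a coarser level.
    level = 6
    divisor = 2**35 * 4**(12 - level)
    print(divisor)   # -> 140737488355328, the number in the query above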

    In case you'd like to try things out on your postgres, here's what I've done to establish the max_parallel_workers_per_gather value above.

    1. Find a table with a few 1e7 rows. Think of a query that will return a small result set in order to not confuse the measurements by excessive client I/O. In my case, that's a magnitude histogram, and the query would be:

      select round(Rmag) as bin, count(*) as n from ppmx.data group by bin;

      Run this query once so the data is in the disk cache (the query is “warm”).

    2. Establish a non-parallel baseline. That's easy to do:

      set max_parallel_workers_per_gather=0;
      
    3. Then run:

      explain analyze select round(Rmag) as bin, count(*) as n from ppmx.data group by bin;
      

      You should see a simple query plan with the runtime for the non-parallel execution – in my case, a bit more than 12 seconds.

    4. Then raise the value of max_parallel_workers_per_gather successively. Make sure the query plan has lines of the form “Workers Planned” or so; the sketch below shows roughly where they appear. You should see that the execution time falls with the number of workers you give it, up to the value of max_worker_processes – or until postgres decides your table is too small to warrant further parallelisation, which for my settings happened at 7.
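      With workers enabled, the explain analyze output for this kind of aggregate roughly has the following shape (figures elided, as they will of course differ on your box):

      Finalize GroupAggregate
        ->  Gather Merge
              Workers Planned: 4
              Workers Launched: 4
              ->  Partial GroupAggregate
                    ->  Sort
                          ->  Parallel Seq Scan on data

      If there is no Gather (Merge) node and no Workers lines, postgres didn't parallelise the query at all.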

    Note, though, that realistic, more complex queries will probably contain multiple operations that profit from parallelisation. So, if in this trivial example you can go up to 15 workers per gather and still see an improvement, that could actually make things slower for complex queries. But as I said above: I have no instinct yet for how things will actually work out. If you have experiences to share: I'm sure I'm not the only person on dachs-users who'd be interested.

  • A New View on SSAP in DaCHS

    When I started working on the VO in 2007, my colleagues in Garching already had software that implemented major parts of the simple spectral access protocol (SSAP) that was being developed back then. It would publish spectra in the FITS format by just blindly dumping all header cards into a database table and then defining a view over that “raw” metadata table to make the whole thing match SSAP's expectations of what the output table should look like. Sometimes you could just map a header through to an SSA column, sometimes you would just convert a unit, and sometimes you would have to write a fairly complex SQL expression combining multiple fields.
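    Purely to illustrate the idea – table and column names here are invented rather than taken from that Garching schema – such a view could look like this:

    CREATE VIEW ssa AS
    SELECT
      object AS ssa_targname,               -- header card mapped through as-is
      lambda_min * 1e-10 AS ssa_specstart,  -- unit conversion: Angstrom to metres
      mjd_obs + exptime / 2 / 86400
        AS ssa_dateobs                      -- expression combining several cards
    FROM raw_fits_headers;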

    Back then, I didn't like it – why have two things (a table and a view) that can break when one (just a table in SSA's format) would do, too? Also, SSAP has about 50 metadata fields, but lets you put constant values into VOTable PARAMs, which seemed a very reasonable way to attain more compact responses. So, when DaCHS grew SSAP support, I defined a mixin (essentially, a configurable interface definition) that let operators define SSA tables and their constant parameters in a fairly simple fashion and directly produced a table you could base your SSAP service on.

    That made assumptions about which pieces of metadata are constant and which are not; for instance, the original mixin (“hcd” for “homogeneous collection”) assumed all spectra in a data collection came from the same instrument and had the same resolution and (what was I thinking?) SNR. Unsurprisingly, that broke fairly soon. So, I added a second mixin (“mixc”) for when different instruments or codes produced the data.

    But even that was a headache, at the latest when I started making time series services using SSAP. And I had to fix a few bugs in the mixins themselves in the meantime, which in that design mostly required re-imports of the data. Such re-imports are non-trivial when you have millions of spectra, and they need to happen at software upgrade time, or the services would break with the upgrade. Ouch.

    It was about mid-2018 when it dawned on me that sometimes it's better to have two things that can break even if one would do, after all. Specifically, if fixing the one thing is expensive, it's an excellent idea to put a facade on top of it that's cheap to change and can already be used to repair most deficiencies. Why re-build the house if a paint job does the trick?

    As to having more compact query responses when you stuff metadata that's constant across all rows into VOTable PARAMs – well, in the age of web pages pulling in a megabyte of javascript and two megabytes of images to display five lines of text, I've become a bit cavalier in that department. Sure, the average row may have grown by a factor of three, but we're still talking only a few megabytes even with large responses. To me, these extra bytes seem a fair price to pay for the increased flexibility and the overall more straightforward architecture.

    So, I've now come up with a view-based solution in DaCHS, too: the //ssap#view mixin. This is a bit less radical than the Garching software of 2007, as it doesn't dump raw headers but instead lets you do the primary transformations in the RD. But it no longer constrains what pieces of metadata should be constant and which may vary between spectra, and it uses the same names for the same pieces of metadata throughout (which also is a step forward over the old SSAP mixins).

    With this, DaCHS operators should no longer use the hcd and mixc mixins for new services. The new technique is already reflected in the respective tutorial chapter, and the SSAP template (you're using dachs start, aren't you?) now uses it, too.

    If you have a spectra publishing project in your pipeline, this would be the perfect time to upgrade to the DaCHS 1.2.4 beta, which has the new mixin. It would be great if we could iron out remaining wrinkles before the next release makes changes a load on my conscience.

    As to migrating existing SSAP services: Well, it would be great if I could drop the old mixins in a couple of years, as they cause quite a bit of ugliness in DaCHS's built-in //ssap RD. But the migration regrettably isn't straightforward, so you may want to wait a bit before embarking on that journey (I'll be happy to help, though).

  • HTTPS in DaCHS

    Image: Browser windows with and without HTTPS – in DaCHS's web interface, the WebSAMP button must disappear from pages served through HTTPS: it simply wouldn't work.

    (Warning: No astronomy-relevant content at all this time).

    I can't say I'm a big fan of the mighty push towards HTTPS that's going on right now – as I'm arguing in the updated operator's guide, it doesn't do people's privacy a lot of good (compared to, say, pushing for browsers not to execute Javascript by default, or for DNSSEC to be widely deployed), while being a fairly substantial operational liability. With HTTPS, operators have to deal with cryptographic material, regularly update their certificates, restart their services in time, and assemble the whole thing correctly (don't get me started about proxying, SNI, and all those horrors). Users, on the other hand, have to keep their CA certificates in order, in particular when they do programmatic VO access, where the browser vendors, their employers, and who knows who else don't do it for them. Pop quiz: How would you install a new CA certificate on your box? And will your default browser see it?

    But on the other hand, there are some scenarios in which HTTPS makes sense, and I can remotely fantasise that some of those may even be relevant to the VO. And people have been asking for HTTPS in DaCHS a number of times, at times even because their administrations urged them to switch. So, here it is, hopefully. Turning it on is reasonably easy when you use Letsencrypt (which in particular entails having ports 80 and 443); the section on Letsencrypt in the operator's guide tells you what to do. In particular, don't forget the cron job, because without it, things would break after three months (when the initial certificate expires).
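    Just to illustrate the shape of that cron job – the renewal command here is a stand-in, the real one is in the operator's guide:

    # /etc/cron.d/renew-tls -- sketch only; "renew-certificate" is a
    # placeholder for whatever the operator's guide says to run.
    # Letsencrypt certificates expire after about 90 days, so running
    # weekly leaves plenty of slack even if a few attempts fail.
    22 4 * * 1  root  /usr/local/bin/renew-certificate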

    Things get difficult after that. For one, if your box is known under several names (our data center, for instance, can be reached as any of dc.g-vo.org, vo.uni-hd.de, and dc.zah.uni-heidelberg.de; this of course also includes things like www.example.org and example.org), you'll now have to tell DaCHS about it in the new [web]alternateHostnames configuration item; for instance, we have:

    [web]
    serverURL: http://dc.zah.uni-heidelberg.de
    alternateHostnames: dc.g-vo.org, vo.uni-hd.de
    

    in our /etc/gavo.rc.

    And then the Registry has to know you have HTTPS. There's actually no convention for that in the VO yet. But since I'd really like to have at least fallback interfaces with plain HTTP, we'll have to come up with something. For now, my plan is to declare the alternative protocol (i.e., HTTPS for sites that have an HTTP serverURL and vice versa) using the brand-new VOResource 1.1 mirrorURLs (in RegTAP 1.1, they are in the mirror_url column of rr.interface). To make DaCHS declare the alternate URLs, set [web]registerAlternative to True.
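    Putting the pieces together, our /etc/gavo.rc from above would then read (a sketch; check the operator's guide for the exact spelling):

    [web]
    serverURL: http://dc.zah.uni-heidelberg.de
    alternateHostnames: dc.g-vo.org, vo.uni-hd.de
    registerAlternative: True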

    Another change I've introduced for HTTPS is that the default HTML template for the form renderer (i.e., the one people use who come with a browser) now suppresses the SAMP button if the request came in through HTTPS; that's because WebSAMP doesn't work with HTTPS and probably never will – at least I can't see a way to make it happen without totally wrecking what security guarantees HTTPS gives.

    All this doesn't yet cater for the case when you use a reverse proxy to terminate HTTPS. If you are in that situation, please talk to me so we can figure out a sane way for you to explain to DaCHS what to tell the Registry.

    Anyway, if you want to try things out, just switch to the beta repository and upgrade. Feedback is highly welcome.

    Oh, and if you're a client developer: Our data center is now reachable through HTTPS (at https://dc.g-vo.org), and we have already pushed the records with mirrorURLs declaring HTTPS support to the RegTAP service at dc.g-vo.org (the others will have to wait a bit longer, as we haven't re-published our registry records yet; it's all experimental, after all).

  • Horror vacui begone

    Image: Mikhail's qrdcreator in a browser, and an editor with a dachs start-produced template.

    One of the major usability issues our publishing suite DaCHS has for operators (i.e., people who want to publish data) is the “horror vacui”: How do I start a Resource Descriptor (RD – the file DaCHS interprets to build services)?

    I used to recommend starting by having a look at the RDs of our existing services and picking whatever best matches your publication project. But finding a matching service and figuring out what is generic, what's a special property of the concrete data collection, and what's a hack that should not be reproduced isn't straightforward at all, not to mention that some of those RDs have been in maintenance mode for almost 10 years and hence may show deprecated practices.

    Then came the VESPA implementation workshop last year, during which Mikhail Minin showed me a piece of javascript and HTML (source on github) he had written to overcome the empty editor window. Essentially, Mikhail has built a fairly comprehensive form interface in a web browser that asks people the right questions to eventually write an RD for EPN-TAP (i.e., solar system) resources.

    I had planned to generalise Mikhail's approach to several types of resources supported by DaCHS, ideally inferring the questions to ask from the built-in documentation of mixins and applys. But during the last year, whenever I felt it would be a good time to tackle that generalisation, I quickly gave up again. What stopped me was mostly rather trivial stuff, such as how to tell apart repeatable metadata (waveband, say) and non-repeatable metadata (instrument, say). But it was bad enough that I quickly found something else to do each time I got started.

    Eventually, I gave up on a menu interface altogether – making it flexible and generatable at the same time seemed a fairly complex problem. But that doesn't mean I forgot about overcoming the horror vacui thing. So, when forms aren't flexible enough for data entry, where do you turn? Right! A text editor.

    Enter dachs start. That's a new DaCHS subcommand that gets you started with your RD. For one, you can list the templates available:

    $ dachs start list
    siap -- Image collections via SIAP1 and TAP
    ssap+datalink -- Spectra via SSAP and TAP, going through datalink
    epntap -- Solar system data via EPN-TAP 2.0
    scs -- Catalogs via SCS and TAP
    

    More templates are planned; siap+datalink, for instance, would cover some frequent use cases. Feel free to mail in requests.

    Once you find a suitable template, create your future resource directory, enter it and run dachs start again, this time passing the name of the template you want:

    $ mkdir ex_data
    $ cd ex_data
    $ dachs start scs
    $ head -16 q.rd | tail -9
    <resource schema="ex_data">
      <meta name="creationDate">2018-04-13T12:34:31Z</meta>
    
      <meta name="title">%title -- not more than a line%</meta>
      <meta name="description">
        %this should be a paragraph or two (take care to mention salient terms)%
      </meta>
      <!-- Take keywords from
        http://astrothesaurus.org/thesaurus/hierarchical-browse/
    

    dachs start uses the directory name as the new schema name and then writes a file q.rd (which is the canonical name for the “main” RD in a resource). Within this file, you'll see things to fill out between pairs of percent signs, with short explanations. Where longer explanations are necessary, embedded comments should help.

    To give you an idea of the intended use: As a vim user, I've put

    augroup rd
      au!
      au BufRead,BufNewFile *.rd imap <F8> <ESC>/%[^%]*%<CR>a
      au BufRead,BufNewFile *.rd imap <F9> <ESC>cf%
    augroup END
    

    into my ~/.vimrc. That way, while editing the template into an actual RD, hitting F8 takes me to the next thing to be edited; I can then read the instructions, and when I have made up my mind, I can either delete the template element or hit F9 and replace the explanation text with whatever belongs there.

    The command is available starting with the 1.1.3 beta (available now by switching to the beta repo) and will be part of the 1.2 release, planned for early June after the Victoria interop.

    If you have a publication project: just try it out and give feedback. Note that the templates haven't actually been tested yet, and the comments were written by a DaCHS and VO nerd, so they might not always be great either. Thus, when you get stuck: complain early, complain often!

  • Heidelberg Data Center Down^WUp again

    Well, it has happened – perhaps it was the strain of restoring a couple of terabytes of data (as reported yesterday), perhaps it's uncorrelated, but our main database server's RAID threw errors and then disappeared from the SCSI bus today at about 15:03 UTC.

    This means that all services from http://dc.g-vo.org are broken for the moment. We're sorry, and we will try to at least limp on as fast as possible.

    Update (2017-11-13, 14:30 UTC): Well, it's official. What's broken is the lousy Adaptec controller – whatever configuration we tried, it can't talk to its backplane any more. Worse, we don't have a spare part for that piece here. We're trying to get one as quickly as possible, but even medium-sized shops don't have multi-channel SAS controllers in stock, so it'll have to be express mail.

    Of course, the results of the weekend's restore are lost; so, we'll need about 24 hours of restore again to get up to 90% of the services after the box is back up, with large tables being restored after that. Again, we're unhappy about the long downtime, but it could only have been averted by having a hot spare, which for this kind of infrastructure just wouldn't have been justifiable over the last ten years.

    Another lesson learned: Hardware RAID sucks. It was really hard to analyse the failure, and the messages of the controller BIOS were completely unhelpful. We, at least, will migrate to JBOD (one of the cool IT acronyms with a laid-back expansion: Just a Bunch Of Disks) and software RAID.

    And you know what? At least the box had two power supplies. If these weren't redundant, you bet the power supply would have failed.

    To give you an idea how bad things are, here is the open server with the controller card that probably caused the mayhem (left), and 12 TB of fast disk, yearning for action (right).

    Image: A database server in pieces.

    Update (2017-11-14, 12:21 UTC): We're cursed. The UPS guys with the new controller were in the main institute building. They claimed they couldn't find anyone. Ok, our janitor is on sick leave, and it was lunch break, but still. It can't be that hard to walk up a single flight of steps. Do we really have to wait another day?

    Update (2017-11-14, 14:19 UTC): Well, UPS must have read this – or the original delivery report was bogus. Anyway, not an hour after the last entry the delivery status changed to "delivered", and there the thing was in our mailbox.

    Except – it wasn't the controller in the first place. It turned out that, in fact, four disks had failed at the same time. It's hard to believe but that's what it is. Seems we'll have to step carefully until the disks are replaced. We'll run a thorough check tonight while we prepare the database tables.

    Unless more disaster strikes, we should be back by tomorrow morning CET – but without the big tables, and I'm not sure yet whether I dare putting them on these flimsy, enterprise-class, 15k SAS disks. Well, I'll give you that they've run for five years now.

    Update (2017-11-15, 14:37 UTC): After a bit more consideration, I figured I wouldn't trust the aging enterprise disks any more. Our admins then gave me a virtual machine on one of their boxes that should be powerful enough to keep the data center afloat for a while. So, the data center is back up at 90% (counting by the number of regression tests still failing) since an hour ago or so.

    Again, the big tables are missing (and a few obscure services the RDs of which showed bitrot and need polishing); they should come in over the next days, one by one; provided the VM isn't much slower than our DB server, you should see about two of them come in per day, with my planned sequence being hsoy, ppmxl, gps1, gaia, 2mass, sdssdr7, urat1, wise, ucac5, ucac4, rosat, ucac3, mwsc, mwsc-e14a, usnob, supercosmos.

    Feel free to vote tables up if you severely miss a table.

    And all this assumes no further disaster strikes...

    Update (2017-11-16, 9:22 UTC): Well, it ain't pretty. The first large catalog, HSOY, is finally in, and the CLUSTER operation (which dominates restore time) took almost 12 hours; and HSOY, at 0.5 Gigarecord, isn't all that large. So, our replacement machine really is a good deal slower than our normal database server, which did that operation in less than three hours. I guess you'll want to do your large-table queries on a different service for the next couple of weeks. Use the Registry!

    Update (2017-11-20, 9:05 UTC): With a bit more RAM (DaCHS operators: version 1.1 will have a new configuration item for indexing work memory!), things have been going faster over the weekend. We're now down to 15 regression tests failing (of 330), with just 4 large catalogs missing still, and then a few nitty-gritty, almost invisible tables still needing some manual work.

    Update (2017-11-23, 14:51 UTC): Only 10 regression tests are still failing, but progress has become slow again – the machine has been clustering supercosmos.data for the last 36 hours now; it's not that huge a table, so it's a bit hard to understand why this table is holding up things so much. On the plus side, new SSDs for our database server are being shipped, so we should see faster operation soon.

    Update (2017-12-01, 13:05 UTC): We've just switched the database server back to our own machine with its fresh SSDs. A few esoteric big tables are yet missing, but we'd say the crisis is over. Hence, this is the last update. Thank you for your attention.
