The "stars" application is code that addresses the following problem domain: a region of the sky is digitised at two different wavelengths, yielding two sets of candidate bright objects, each characterised by a position (x, y) and a width (sigma). The problem is to find bright objects at the same positions in both sets, consistent with the width of each. The schema used in the application specifies "star" objects with the data members xcentre, ycentre, sigma (width) and catalogue (identification number), with associated member functions that return the position, the sigma, the catalogue number, the proximity of a point (X, Y) to the "star", and so on.
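The schema just described can be sketched as a plain C++ class. This is only an illustration following the member names given above; the actual application class would be an Objectivity/DB persistent object, and the particular definition of proximity used here (distance in units of the star's width) is an assumption.

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch of the "star" schema: data members and accessors
// follow the names in the text; the proximity definition is assumed.
class Star {
public:
    Star(double x, double y, double s, int cat)
        : xcentre(x), ycentre(y), sigma(s), catalogue(cat) {}

    double x() const { return xcentre; }
    double y() const { return ycentre; }
    double width() const { return sigma; }
    int id() const { return catalogue; }

    // Proximity of a point (X, Y) to the star, expressed in units of
    // the star's width sigma.
    double proximity(double X, double Y) const {
        double dx = X - xcentre;
        double dy = Y - ycentre;
        return std::sqrt(dx * dx + dy * dy) / sigma;
    }

private:
    double xcentre, ycentre, sigma;  // position and width
    int catalogue;                   // identification number
};
```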
The application is in two parts: the first part generates a randomly distributed set of stars in each of two databases in an Objectivity federated database. The second part attempts to match the position of each star in the first database with each star in the second database, in order to find the star in the second database that is closest to the star in the first. We expect the matching time to scale as N^2, where N is the number of stars in each database.
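The naive matching step, which gives the expected N^2 scaling, can be sketched as a nested loop. The Star struct below is a plain stand-in for the persistent objects, and the function name is hypothetical; the faster index-based variant measured later is not reproduced here.

```cpp
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

// Plain stand-in for the persistent star objects.
struct Star { double x, y, sigma; int catalogue; };

// For each star in `a`, scan the whole of `b` for the nearest star and
// record its index. Two nested loops over N stars each: O(N^2) work.
std::vector<std::size_t> matchAll(const std::vector<Star>& a,
                                  const std::vector<Star>& b) {
    std::vector<std::size_t> best(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) {
        double bestDist2 = std::numeric_limits<double>::max();
        for (std::size_t j = 0; j < b.size(); ++j) {
            double dx = a[i].x - b[j].x;
            double dy = a[i].y - b[j].y;
            double d2 = dx * dx + dy * dy;  // squared distance suffices
            if (d2 < bestDist2) {
                bestDist2 = d2;
                best[i] = j;
            }
        }
    }
    return best;
}
```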
This application, although not taken from High Energy Physics, is analogous to matching energy deposits in a calorimeter with track impact positions, which is a typical event reconstruction task. The application has the advantage that it is small and easy to port from one OS to another, and from one ODBMS to another.
Using the "stars" application, we measured matching speed as a function of
the number of objects in each database. The results showed that
the fastest matching times are obtained by using an index on the positions
of the star objects, and the slowest by using a text-based "predicate" query.
We then measured matching speeds on different hardware and operating systems,
comparing the performance of the matching using indices on the Exemplar, a
Pentium II (266 MHz), a Pentium Pro (200 MHz) and an HP 755 workstation.
For all these tests, we ensured that both stars databases were completely
contained in the system cache, so that we could disregard effects due to
disk access speeds on the various machines. The results demonstrated the
platform independence of the ODBMS and application software,
and illustrated the performance differences due to the speeds of the CPUs,
the code generated by the C++ compilers, and so on.
Another test showed how the application and database could reside on
different systems, and what impact this separation had on performance:
we measured the matching speed on a 266 MHz Pentium II PC with
local databases, and with databases stored on a remote HP 755 workstation.
For the problem sizes we were using, there was no significant performance
degradation when the data were held remotely from the client application.
The Objectivity/DB cache is used to store one or more pages of the
database(s) in memory, so improving performance for queries that access
objects contained in cached pages. The size of the cache can be configured
in the application code. We measured the performance of the
"stars" application with differing cache sizes when matching 2000
star objects. In this test we observed some erratic behaviour
when using very small caches. However, the overall results showed that,
for the databases being used, all the objects were accommodated in the default-sized cache,
and no benefit was obtained by increasing it.
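The observation above follows from simple arithmetic: once every page holding the objects fits in the cache, enlarging the cache cannot reduce page fetches. A minimal sketch of that check, with illustrative page, cache and object sizes that are assumptions rather than the values used in the measurements:

```cpp
#include <cassert>

// Back-of-the-envelope check: do all N objects fit in a cache of
// `cachePages` pages, each `pageBytes` large? Object and page sizes
// here are hypothetical, purely for illustration.
bool fitsInCache(long nObjects, long objectBytes,
                 long cachePages, long pageBytes) {
    long objectsPerPage = pageBytes / objectBytes;
    // Round up to the number of pages needed to hold all the objects.
    long pagesNeeded = (nObjects + objectsPerPage - 1) / objectsPerPage;
    return pagesNeeded <= cachePages;
}
```

With, say, 64-byte objects on 8 KB pages, 2000 objects occupy only 16 pages, so any cache of at least that many pages holds the whole working set.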