41 eDiamond: a Grid-enabled federated database of annotated mammograms
Michael Brady,¹ David Gavaghan,² Andrew Simpson,³ Miguel Mulet Parada,³ and Ralph Highnam³

¹Oxford University, Oxford, United Kingdom, ²Computing Laboratory, Oxford, United Kingdom, ³Oxford Centre for Innovation, Oxford, United Kingdom
41.1 INTRODUCTION
This chapter introduces a project named eDiamond, which aims to develop a Grid-enabled
federated database of annotated mammograms, built at a number of sites (initially in the
United Kingdom), and which ensures database consistency and reliable image processing.
A key feature of eDiamond is that images are ‘standardised’ prior to storage. Section 41.3
describes what this means, and why it is a fundamental requirement for numerous grid
applications, particularly in medical image analysis, and especially in mammography. The
eDiamond database will be developed with two particular applications in mind: teach-
ing and supporting diagnosis. There are several other applications for such a database,
as Section 41.4 discusses, which are the subject of related projects. The remainder of
Grid Computing – Making the Global Infrastructure a Reality. Edited by F. Berman, A. Hey and G. Fox
© 2003 John Wiley & Sons, Ltd. ISBN: 0-470-85319-0
this section discusses the ways in which information technology (IT) is impacting on
the provision of health care – a subject that in Europe is called Healthcare Informatics.
Section 41.2 outlines some of the issues concerning medical images, and then Section 41.3
describes mammography as an important special case. Section 41.4 is concerned with
medical image databases, as a prelude to the description in Section 41.5 of the eDia-
mond e-Science project. Section 41.6 relates the eDiamond project to a number of other
efforts currently under way, most notably the US NDMA project. Finally, we draw some
conclusions in Section 41.7.
All Western societies are confronting similar problems in providing effective healthcare
at an affordable cost, particularly as the baby boomer generation nears retirement, as the
cost of litigation spirals, and as there is a continuing surge of developments in often
expensive pharmaceuticals and medical technologies. Interestingly, IT is now regarded as
the key to meeting this challenge, unlike the situation as little as a decade ago when IT
was regarded as a part of the problem. Of course, some of the reasons for this change in
attitude to IT are generic, rather than being specific to healthcare:
• The massive and continuing increase in the power of affordable computing, and the consequent widespread use of PCs in the home, so that much of the population now regard computers and the Internet as aspects of modern living that are as indispensable as owning a car or a telephone;
• The miniaturisation of electronics, which has made computing devices ubiquitous, in phones and personal organisers;
• The rapid deployment of high-bandwidth communications, key for transmitting large
images and other patient data between centres quickly;
• The development of the global network, increasingly transitioning from the Internet to
the Grid; and
• The design of methodologies that enable large, robust software systems to be developed,
maintained and updated.
In addition, there are a number of factors that contribute to the changed attitude to IT
which are specific to healthcare:
• The increasing number of implementations of hospital information systems, including
electronic medical records;
• The rapid uptake of Picture Archiving and Communication Systems (PACS) which
enable images and signals to be communicated and accessed at high bandwidth around
a hospital, enabling clinicians to store images and signals in databases and then to view
them at whichever networked workstation is most appropriate;
• Growing evidence that advanced decision support systems can have a dramatic impact
on the consistency and quality of care;
• Novel imaging and signalling systems (see Section 41.2), which provide new ways to
see inside the body, and to monitor disease processes non-invasively;
• Miniaturisation of mechatronic systems, which enable minimally invasive surgery, and
which in turn benefits the patient by reducing recovery time and the risk of complica-
tions, at the same time massively driving down costs for the health service provider;
• Digitisation of information, which means that the sites at which signals, images and other patient data are generated, analysed, and stored need not be the same, as increasingly they are not;¹ and, by no means least,
• The increased familiarity with, and utilisation of, PCs by clinicians. As little as five years ago, few consultant physicians would use a PC in their normal workflow; now almost all do.
Governments have recognised these benefits and have launched a succession of ini-
tiatives, for example, the UK Government’s widely publicised commitment to elec-
tronic delivery of healthcare by 2008, and its National Cancer Plan, in which IT fea-
tures strongly.
However, these technological developments have also highlighted a number of major
challenges. First, the increasing range of imaging modalities, allied to the fear of litigation,² means that clinicians are drowning in data. We return to this point in Section 41.2. Second,
in some areas of medicine – most notably mammography – there are far fewer skilled
clinicians than there is a need for. As we point out in Section 41.3, this offers an opportu-
nity for the Grid to contribute significantly to developing teleradiology in order to allow
the geographic separation of the skilled clinician from his/her less-skilled colleague and
that clinician’s patient whilst improving diagnostic capability.
41.2 MEDICAL IMAGES
Röntgen's discovery of X rays in the last decade of the Nineteenth Century was the first
of a continuing stream of technologies that enabled clinicians to see inside the body,
without first opening the body up. Since bones are calcium-rich, and since calcium atten-
uates X rays about 26 times more strongly than soft tissues, X-radiographs were quickly
used to reveal the skeleton, in particular, to show fractures. X rays are normally used
in transmission mode – the two-dimensional spatial distribution of the transmitted intensity is recorded for a given
(known) source flux. A variety of reconstruction techniques, for example, based on the
Radon transform, have been developed to combine a series of two-dimensional projection
images taken from different directions (normally on a circular orbit) to form a three-
dimensional ‘tomographic’ volume. Computed Tomography (CT) is nowadays one of the
tools most widely used in medicine. Of course, X rays are intrinsically ionising radiation,
so in many applications the energy has to be very carefully controlled, kept as low as
possible, and passed through the body for as short a time as possible, with the inevitable
result that the signal-to-noise (SNR) of the image/volume is greatly reduced. X rays of
the appropriate energies were used increasingly from the 1930s to reveal the properties
of soft tissues, and from the 1960s onwards to discover small, non-palpable tumours for
which the prognosis is very good. This is most highly developed for mammography, to
¹ This technological change, together with the spread of PACS systems, has provoked turf battles between different groups of medical specialists as to who 'owns' the patient at which stage of diagnosis and treatment. The emergence of the Grid will further this restructuring.
² It is estimated that fully 12% of malpractice suits filed in the USA concern mammography, with radiologists overwhelmingly heading the 'league table' of clinical specialties that are sued.
which we return in the next section; but it remains the case that X rays are inappropriate
for distinguishing many important classes of soft tissues, for example, white and grey
matter in the brain.
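The reconstruction step described above – combining projections taken from many directions – can be caricatured in a few lines of code. The sketch below is a deliberately toy, unfiltered back-projection (clinical CT uses the filtered inverse Radon transform), with illustrative array sizes and angles:

```python
import numpy as np
from scipy.ndimage import rotate

# Toy phantom: a single bright spot in an otherwise empty 64x64 slice.
phantom = np.zeros((64, 64))
phantom[40, 25] = 1.0

# Forward model: a 'projection' at angle theta is the column sum of the
# rotated slice (a crude stand-in for the Radon transform).
angles = np.arange(0, 180, 10)
recon = np.zeros_like(phantom)
for theta in angles:
    proj = rotate(phantom, theta, reshape=False, order=1).sum(axis=0)
    # Back-projection: smear the 1D projection across the image and
    # rotate it back; summing over all angles concentrates intensity
    # where the lines of integration intersect.
    smear = np.tile(proj / phantom.shape[0], (phantom.shape[0], 1))
    recon += rotate(smear, -theta, reshape=False, order=1)

peak = np.unravel_index(np.argmax(recon), recon.shape)
# The unfiltered result is blurred (roughly 1/r), but its peak sits at
# the phantom's bright spot.
```

The filtering stage that real CT adds sharpens exactly this 1/r blur; the toy above only shows why projections from a circular orbit suffice to localise structure.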
The most exquisite images of soft tissue are currently produced using magnetic res-
onance imaging (MRI), see Westbrook and Kaut [1] for a good introduction to MRI.
However, to date, no pulse sequence is capable of distinguishing cancerous tissue from normal tissue, except when using a contrast agent such as the paramagnetic gadolinium chelate gadopentetate dimeglumine (Gd-DTPA). In contrast-enhanced MRI to detect breast cancer, the patient lies on her front with the breasts pendulous in a special radio frequency (RF) receiver coil; one or more image volumes are taken prior to bolus injection of Gd-DTPA, and then image volumes are taken as fast as possible, for up to ten minutes. In a typical clinical setting, this generates 12 image volumes, each comprising 24 slice images, each
256 × 256 pixels, a total of 18 MB per patient per
visit. This is not large by medical imaging standards, certainly it is small compared
to mammography. Contrast-enhanced MRI is important for detecting cancer because
it highlights neoangiogenesis, a tangled mass of millions of micron-thick leaky blood vessels, grown by a tumour to feed its growth. This is essentially physiological – functional – rather than anatomical – information [2]. Nuclear medicine modalities such as positron-emission tomography (PET) and single photon emission computed
tomography (SPECT) currently have the highest sensitivity and specificity for cancer,
though PET remains relatively scarce, because of the associated capital and recurrent costs, not least of which is a cyclotron to produce the necessary quantities of radiopharmaceuticals.
Finally, in this very brief tour (see [3, 4] for more details about medical imaging), ultra-
sound image analysis has seen major developments over the past decade, with Doppler,
second harmonic, contrast agents, three-dimensional probes, and so on; but image quality,
particularly for cancer, remains sufficiently poor to offset its price advantages.
Generally, medical images are large and depict anatomical and pathophysiological
information of staggering variety both within a single image and across a population of
images. Worse, it is usually the case that clinically significant information is quite subtle.
For example, Figure 41.1 shows a particularly straightforward example of a mammogram.
Microcalcifications, the small white spots shown in Figure 41.1, are deposits of calcium
or magnesium salts that are smaller than 1 mm. Clusters of microcalcifications are often the
earliest sign of non-palpable breast cancer, though it must be stressed that benign clusters
are often found, and that many small white dots do not correspond to microcalcifications
(see Highnam and Brady [5] for an introduction to the physics of mammography and to
microcalcifications). In order to retain the microcalcifications that a skilled radiologist can
detect, it is usual to digitise mammograms to a resolution of 50 to 100 µm. It has been found that the densities in a mammogram need to be digitised to a resolution of 14 to
16 bits, yielding 2 bytes per pixel. An A4-sized mammogram digitised at the appropriate
resolution gives an image that is typically
4000 × 4000 pixels, that is 32 MB. Generally,
two views – craniocaudal (CC, head to toe) and mediolateral oblique (MLO, shoulder
to opposite hip) – are taken of each of the breasts, giving 128 MB per patient per visit,
approximately an order of magnitude greater than that from a contrast-enhanced MRI
Figure 41.1 A patient aged 61 years presented with a breast lump. Mammography reveals a
2 cm tumour and extensive microcalcifications, as indicated by the arrows. Diagnostically, this
is straightforward.
examination. Note that the subtlety of clinical signs means that in practice only loss-less
image compression can be used.
Medical images have poor SNR, relative to good quality charge-coupled device (CCD)
images (the latter nowadays exhibit less than 1% noise, a factor of 5 to 10 better than most
medical images). It is important to realise that there are distortions of many kinds in
medical images. As well as high frequency noise (that is rarely Gaussian), there are
degrading effects, such as the ‘bias field’, a low-frequency distortion due to imperfections
in the MRI receiver coil. Such a degradation of an image may appear subtle, and may be
discounted by the (expert) human eye; but it can distort massively the results of automatic
tissue classification and segmentation algorithms, and give wildly erroneous results for
algorithms attempting quantitative analysis of an image.
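The damage a bias field does to automatic analysis can be shown with a toy experiment (this is not the MRI bias-field correction literature, just a synthetic illustration: intensities, ramp and threshold are all made up):

```python
import numpy as np

# Toy 'tissue' slice: background intensity 100, a square 'lesion' at 200.
img = np.full((64, 64), 100.0)
img[20:40, 20:40] = 200.0
truth = img > 150                      # ground-truth segmentation

# A smooth multiplicative 'bias field' (here a simple horizontal ramp
# from 0.5 to 1.6) standing in for receiver-coil inhomogeneity.
gain = np.linspace(0.5, 1.6, img.shape[1])[np.newaxis, :]
biased = img * gain

# A fixed global threshold segments the clean image perfectly...
errors_clean = np.count_nonzero((img > 150) != truth)
# ...but misclassifies whole columns once the bias field is applied,
# even though the eye would barely notice the slow shading.
errors_biased = np.count_nonzero((biased > 150) != truth)
```

The slow shading is almost invisible to a human observer, yet any algorithm keyed to absolute intensity – thresholding, tissue classification, quantitative measurement – inherits its errors, which is why bias correction precedes analysis in practice.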
Over the past fifteen years there has been substantial effort aimed at medical image
analysis – the interested reader is referred to journals such as IEEE Transactions on Med-
ical Imaging or Medical Image Analysis, as well as conference proceedings such as
MICCAI (Medical Image Computation and Computer-Assisted Intervention). There has
been particular effort expended upon image segmentation to detect regions-of-interest, upon shape analysis and motion analysis, and upon non-rigid registration of data, for example, from
different patients. To be deployed in clinical practice, an algorithm has to work 24/7 with
extremely high sensitivity and specificity. This is a tough specification to achieve even for
images of relatively simple shapes and in cases for which the lighting and camera-subject
pose can be controlled; it is doubly difficult for medical images, for which none of these
simplifying considerations apply. There is, in fact, a significant difference between image
analysis that uses medical images to illustrate the performance of an algorithm, and med-
ical image analysis, in which application-specific information is embedded in algorithms
in order to meet the demanding performance specifications.
We noted in the previous section that clinicians often find themselves drowning in
data. One potential solution is data fusion – the integration of diverse data sets in a single
cohesive framework – which provides the clinician with information rather than data. For
example, as we noted above, PET and SPECT can help identify the microvasculature
grown by a tumour. However, the spatial resolution of PET is currently relatively poor
(e.g. 3 to 8 mm voxels), too poor to be the basis for planning (say) radiotherapy. On the
other hand, CT has excellent spatial resolution; but it does not show soft tissues such
as grey matter, white matter, or a brain tumour. Data fusion relates information in the
CT with that in the PET image, so that the clinician not only knows that there is a
tumour but where it is. Examples of data fusion can be found by visiting the Website:
http://www.mirada-solutions.com
PACS systems have encouraged the adoption of standards in file format, particularly the DICOM standard – Digital Imaging and Communications in Medicine. In principle, apart from the raw image data, DICOM specifies the patient identity and the time and place at which the image was taken, gives certain technical information (e.g. pulse sequence, acquisition time), specifies the region imaged, and gives information such as the number of slices, and so on.
Such is the variety of imaging types and the rate of progress in the field that DICOM
is currently an often frustrating, emerging set of standards.
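Concretely, DICOM identifies every data element by a (group, element) tag pair. The sketch below lists a handful of well-known tags (the tag numbers are from the standard's data dictionary); the dict-based 'dataset' is a hypothetical stand-in for what a real parser such as pydicom would expose:

```python
# A few standard DICOM tags, written as (group, element) pairs.
TAGS = {
    "PatientName": (0x0010, 0x0010),
    "StudyDate":   (0x0008, 0x0020),
    "Modality":    (0x0008, 0x0060),
    "Rows":        (0x0028, 0x0010),
    "Columns":     (0x0028, 0x0011),
    "BitsStored":  (0x0028, 0x0101),
}

# A toy 'dataset': tag -> value, as a real parser would expose it.
dataset = {
    TAGS["Modality"]: "MG",      # MG = mammography
    TAGS["Rows"]: 4000,
    TAGS["Columns"]: 4000,
    TAGS["BitsStored"]: 14,
}

def lookup(ds, name):
    """Fetch an element by its human-readable keyword."""
    return ds.get(TAGS[name])

assert lookup(dataset, "Modality") == "MG"
assert lookup(dataset, "BitsStored") == 14
```

The frustration mentioned above shows up precisely here: which optional tags are actually filled in, and with what conventions, varies between vendors and modalities.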
41.3 MAMMOGRAPHY
41.3.1 Breast cancer facts
Breast cancer is a major problem for public health in the Western world, where it is the
most common cancer among women. In the European Community, for example, breast
cancer represents 19% of cancer deaths and fully 24% of all cancer cases. It is diagnosed
in a total of 348 000 cases annually in the United States and the European Community
and kills almost 115 000 annually. Approximately 1 in 8 women will develop breast
cancer during the course of their lives, and 1 in 28 will die of the disease. According to
the World Health Organization, there were 900 000 new cases worldwide in 1997. Such
grim statistics are now being replicated in eastern countries as diets and environment
become more like their western counterparts.
During the past sixty years, female death rates in the United States from breast can-
cer stayed remarkably constant while those from almost all other causes declined. The
sole exception is lung cancer death rates, which increased sharply from 5 to 26 per
100 000. It is interesting to compare the figures for breast cancer with those from cer-
vical cancer, for which mortality rates declined by 70% after the cervical smear gained
widespread acceptance.
The earlier a tumour is detected the better the prognosis. A tumour that is detected
when its size is just 0.5 cm has a favourable prognosis in about 99% of cases, since it
is highly unlikely to have metastasized. Few women can detect a tumour by palpation
(breast self-examination) when it is smaller than 1 cm, by which time (on average) the
tumour will have been in the breast for up to 6 to 8 years. The five-year survival rate
for localized breast cancer is 97%; this drops to 77% if the cancer has spread by the
time of diagnosis and to 22% if distant metastases are found (Journal of the National
Cancer Institute).
This is the clear rationale for screening, which is currently based entirely on X ray
mammography (though see below). The United Kingdom was the first country to develop
a national screening programme – the UK Breast Screening Programme (BSP), which began in 1987 – and several other countries have since established such programmes: Sweden, Finland, The Netherlands, Australia, and Ireland; France, Germany and Japan are now following suit. Currently, the BSP invites women
between the ages of 50 and 64 for breast screening every three years. If a mammogram
displays any suspicious signs, the woman is invited back to an assessment clinic where
other views and other imaging modalities are utilized. Currently, 1.3 million women are
screened annually in the United Kingdom. There are 92 screening centres with 230 radi-
ologists, each radiologist reading on average 5000 cases per year, but some read up to
20 000.
The restriction of the BSP to women aged 50 and above stems from the fact that the
breasts of pre-menopausal women, particularly younger women, are composed primarily
of milk-bearing tissue that is calcium-rich; this milk-bearing tissue involutes to fat during
the menopause – and fat is transparent to X rays. So, while a mammogram of a young
woman appears like a white-out, the first signs of tumours can often be spotted in those of post-menopausal women. In essence, the BSP defines the menopause to be substantially
complete by age 50!
The UK programme resulted from the Government’s acceptance of the report of the
committee chaired by Sir Patrick Forrest. The report was quite bullish about the effects
of a screening programme:
by the year 2000 the screening programme is expected to prevent about 25% of deaths from breast cancer in the population of women invited for screening. On average, each of the women in whom breast cancer is prevented will live about 20 years more. Thus by the year 2000 the screening programme is expected to result in about 25 000 extra years of life gained annually in the UK.
To date, the BSP has screened more than eleven million women and has detected over
65 000 cancers. Research published in the BMJ in September 2000 demonstrated that
the National Health Service (NHS) Breast Screening Programme is saving at least 300
lives per year. The figure is set to rise to 1250 by 2010. More precisely, Moss (British Medical Journal, 16/9/2000) demonstrated that the NHS breast screening programme, begun in 1987, resulted in substantial reductions in mortality from breast cancer by 1998. In 1998, mortality was reduced by an average of 14.9% in those aged 50 to 54 and 75 to 79 – age groups largely unaffected by screening, so this reduction can be attributed to treatment improvements. In the age groups also affected by screening (55 to 69), the reduction in mortality was 21.3%. Hence, the estimated direct contribution from screening was 6.4%.
Recent studies suggest that the rate at which interval cancers – cancers that appear between successive screening rounds – arise is turning out to be considerably larger than predicted in the Forrest Report. Increasingly, there are calls for mammograms to be taken every two years
and for both a CC and MLO image to be taken of each breast.
Currently, some 26 million women are screened in the United States annually (approx-
imately 55 million worldwide). In the United States there are 10 000 mammography-
accredited units. Of these, 39% are community and/or public hospitals, 26% are private
radiology practices, and 13% are private hospitals. Though there are 10 000 mammography centres, there are only 2500 mammography-specific radiologists – there is a worldwide shortage of radiologists and radiologic technologists (the term in the United Kingdom is radiographers). Huge numbers of mammograms are still read by non-specialists, contravening recommended practice, with average throughput rates of between 5 and 100 per hour. Whereas expert radiologists have cancer detection rates of
76 to 84%, generalists have rates that vary between 8 and 98% (with varying numbers
of false-positives). The number of cancers that are deemed to be visible in retrospect, that
is, when the outcome is known, approaches 70% (American Journal of Roentgenology
1993). Staff shortages in mammography seem to stem from the perception that it is ‘bor-
ing but risky’: as we noted earlier, 12% of all malpractice lawsuits in the United States
are against radiologists, with the failure to diagnose breast cancer becoming one of the
leading reasons for malpractice litigation (AJR 1997 and Clark 1992). The shortage of
radiologists is driving the development of specialist centres and technologies (computers)
that aspire to replicate their skills. Screening environments are ideally suited to computers,
as they are repetitive and require objective measurements.
As we have noted, screening has already produced encouraging results. However, there
is much room for improvement. For example, it is estimated that a staggering 25% of
cancers are missed at screening. It has been demonstrated empirically that double reading
greatly improves screening results; but this is too expensive and in any case there are
too few screening radiologists. Indeed, recall rates drop by 15% when using two views of each breast (British Medical Journal, 1999). Double reading of screening mammograms has been shown to halve the number of cancers missed. However, a study at Yale of board-certified radiologists showed that they disagreed 25% of the time about whether a biopsy was warranted, and 19% of the time in assigning patients to 1 of 5 diagnostic categories.
Recently, it has been demonstrated that single reading plus the use of computer-aided diagnosis (CAD) tools – image analysis algorithms that aim to detect microcalcifications and small tumours – also greatly improves screening effectiveness, perhaps by as much as 20%.
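One classical ingredient of such CAD tools is a morphological 'top-hat' filter, which responds to small bright structures while suppressing the slowly varying background. The sketch below is a toy on synthetic data (a real detector adds physics-based normalisation, shape analysis and clustering; all sizes and thresholds here are illustrative):

```python
import numpy as np
from scipy.ndimage import white_tophat

# Synthetic 'mammogram' patch: smooth background plus two tiny bright
# spots standing in for microcalcifications.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:128, 0:128]
background = 100 + 0.5 * xx            # slowly varying tissue intensity
patch = background + rng.normal(0, 1.0, (128, 128))
for r, c in [(30, 40), (90, 100)]:
    patch[r - 1:r + 2, c - 1:c + 2] += 40.0   # ~3-pixel 'calcifications'

# White top-hat = image minus its morphological opening. Structures
# smaller than the structuring element survive; the background ramp,
# which a naive brightness threshold would trip over, is removed.
residual = white_tophat(patch, size=9)
detections = residual > 20.0

assert detections[30, 40] and detections[90, 100]
# A naive global threshold would instead flag the whole bright side of
# the ramp: background alone spans ~100 to ~164 here.
```

The point is not the filter itself but the division of labour: morphology finds candidates cheaply and exhaustively, and downstream classification supplies the specificity a screening setting demands.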
Post-screening, the patient may be assessed by other modalities such as palpation, ultrasound and, increasingly, by MRI; 5 to 10% of those screened undergo this extended 'work-up'. Post work-up, around 5% of patients have a biopsy. In light of the number
of tumours that are missed at screening (which reflects the complexity of diagnosing
the disease from a mammogram), it is not surprising that clinicians err on the side of
caution and order a large number of biopsies. In the United States, for example, there are
over one million biopsies performed each year: a staggering 80% of these reveal benign
(non-cancerous) disease.
It has been reported that between screenings 22% of previously taken mammograms are unavailable or are difficult to find, mostly because they have been misfiled in large film archives – lost films are a daily headache for radiologists around the world – and 50% of these were obtained only after major effort (Bassett et al., American Journal of Roentgenology, 1997).
41.3.2 Mammographic images and standard mammogram form (SMF)
Figure 41.2 is a schematic of the formation of a (film-screen) mammogram. A collimated beam of X rays passes through the breast, which is compressed (typically with a force of 14 N) between two Lucite plates. The X-ray photons that emerge from the lower plate pass through the film before being converted to light photons, which then expose the film; the film is subsequently scanned (i.e. converted to electrons) at a resolution of (typically) 50 µm. In the case of full-field digital mammography, the X-ray photons are converted
directly to electrons by an amorphous silicon sensor that replaces the film screen. As
Figure 41.2 also shows, a part of the X-ray flux passes in a straight line through the
breast, losing a proportion of less energetic photons en route as they are attenuated by the
tissue that is encountered. The remaining X-ray photon flux is scattered and arrives at the sensor surface from many directions (scatter is, in practice, reduced by an anti-scatter grid, which has the side-effect of approximately doubling the exposure of the breast). Full
details of the physics of image acquisition, including many of the distorting effects, and
the way in which image analysis algorithms can be developed to undo these distortions,
are presented in Highnam and Brady [5].
For the purposes of this article, it suffices to note that though radiologic technolo-
gists are well trained, the control over image formation is intrinsically weak. This is
illustrated in Figure 41.3, which shows the same breast imaged with two different expo-
sure times. The images appear very different. There are many parameters p that affect
the appearance of a mammogram, including: tube voltage, film type, exposure time, and
placement of an automatic exposure control. If these were to vary freely for the same
compressed breast, there would be huge variation in image brightness and contrast. Of
course, it would be ethically unacceptable to perform that experiment on a living breast:
the accumulated radiation dose would be far too high. However, it is possible to develop
a mathematical model of the formation of a mammogram, for example, the Highnam-
Brady physics model. With such a model in hand, the variation in image appearance can be simulated. This is the basis of the teaching system VirtualMammo, developed by Mirada Solutions Limited in association with the American Society of Radiologic Technologists (ASRT).

Figure 41.2 Schematic of the formation of a mammogram. (Labelled components: X-ray target, collimator, compression plate, primary beam, scattered photon, glare, and film-screen cassette with anti-scatter grid and intensifier.)

Figure 41.3 Both sets of images are of the same pair of breasts, but the left pair is scanned with a shorter exposure time than the right pair – an event that can easily happen in mammography. Image processing algorithms that search for 'bright spots' will be unable to deal with such changes.
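The effect hinted at in Figure 41.3 can be mimicked with a crude forward model: the same tissue, imaged at two exposure times, yields different pixel values, so any rule keyed to absolute brightness changes its answer. A toy sketch (monoenergetic Beer-Lambert attenuation with made-up numbers; the actual Highnam-Brady model is far richer, covering film characteristics, scatter and glare):

```python
import numpy as np

# One row of 'breast': thickness of X-ray-attenuating tissue in mm.
tissue_mm = np.array([10.0, 12.0, 30.0, 12.0, 10.0])
mu = 0.05                     # toy attenuation coefficient per mm

def pixel_values(exposure):
    """Photons surviving the tissue, scaled by exposure time (toy model)."""
    return exposure * np.exp(-mu * tissue_mm)

short = pixel_values(1000.0)
long_ = pixel_values(2000.0)

# A brightness rule tuned on one exposure misfires on the other:
bright_short = short > 600.0
bright_long = long_ > 600.0
assert not np.array_equal(bright_short, bright_long)

# Yet the ratio of any two pixels is exposure-independent - what the
# breast contributes survives; the acquisition parameters do not.
assert np.allclose(short / short[0], long_ / long_[0])
```

The second assertion is the germ of standardisation: quantities that cancel the acquisition parameters are the ones worth storing.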
The relatively weak control on image formation, coupled with the huge change in
image appearance, at which Figure 41.3 can only hint, severely limits the usefulness of
the (huge) databases that are being constructed – images submitted to the database may
tell more about the competence of the technologists who took the image, or the state of the
equipment on which the image was formed, than about the patient anatomy/physiology,
which is the reason for constructing the database in the first place! It is precisely this
problem that the eDiamond project aims to address.
In the course of developing an algorithm to estimate, and correct for, the scattered
radiation shown in Figure 41.2, Highnam and Brady [5] made an unexpected discovery:
it is possible to estimate, accurately, the amount of non-fat tissue in each pixel column
of the mammogram. More precisely, first note that the X-ray attenuation coefficients of
normal, healthy tissue and cancerous tissue are very nearly equal, but are quite different
from that of fat. Fat is clinically uninteresting, so normal healthy and cancerous tissues
are collectively referred to as ‘interesting’: Highnam and Brady’s method estimates – in
millimetres – the amount of interesting tissue in each pixel column, as is illustrated in
Figure 41.4.
The critical point to note is that the interesting tissue representation refers only to (pro-
jected) anatomical structures – the algorithm has estimated and eliminated the particular
parameters p(I) that were used to form this image I. In short, the image can be regarded
as standardised. Images in standardised form can be included in a database without the
confounding effect of the (mostly irrelevant – see below) image formation parameters.
This greatly increases the utility of that database. Note also that the interesting tissue
representation is quantitative: measurements are in millimetres, not in arbitrary contrast
units that have no absolute meaning.
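The idea behind the interesting-tissue representation can be sketched with the same toy physics: if a breast of known compressed thickness H contains only 'fat' and 'interesting' tissue, the measured attenuation can be inverted to give the interesting-tissue thickness in millimetres. A hedged sketch (monoenergetic Beer-Lambert with invented coefficients; the real SMF computation models the full acquisition chain, including scatter, glare and the film curve):

```python
import numpy as np

H = 50.0                 # compressed breast thickness in mm (assumed known)
MU_FAT = 0.045           # toy attenuation coefficients per mm; real values
MU_INT = 0.080           # are energy-dependent and tabulated

def forward(h_int, photons_in):
    """Photon count for a column with h_int mm of interesting tissue."""
    h_fat = H - h_int
    return photons_in * np.exp(-(MU_INT * h_int + MU_FAT * h_fat))

def standardise(pixel, photons_in):
    """Invert the model: recover mm of interesting tissue per column."""
    # ln(I0/I) = mu_fat*H + (mu_int - mu_fat)*h_int, solved for h_int.
    return (np.log(photons_in / pixel) - MU_FAT * H) / (MU_INT - MU_FAT)

# Two acquisitions of the same column differ in incident flux (exposure),
# but both standardise to the same tissue thickness in millimetres:
h_true = 18.0
for flux in (1000.0, 2500.0):
    pixel = forward(h_true, flux)
    assert abs(standardise(pixel, flux) - h_true) < 1e-9
```

Note what the inversion delivers: a quantity in millimetres that is the same whatever the exposure, which is exactly the property that makes standardised images comparable across a federated database.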
[...] applications: the use of a federated database for quality control, for training and testing a system to detect microcalcification clusters, and to initiate work on using the Grid to support epidemiological studies. Third, we are just beginning work on a project entitled Grid-Enabled Knowledge Services: Collaborative Problem Solving Environments [...] AKT and Medical Image and Signal (MIS) technologies in a Grid services context. Although this work focuses on medical applications, the majority of the research has generic applicability to many e-Science areas. The project aims at the use of the Grid to solve a pressing – and typical – medical problem rather than seeking primarily to develop the Grid architecture and software base. However, it seeks to [...] development, testing and validation of the system on a set of important applications. We consider each of these in turn.
41.5.3.1 Development of the Grid infrastructure
There are a number of aspects to the development of a Grid infrastructure for eDiamond. The first such aspect is security: ensuring secure file transfer, and tackling the security issues involved in having patient records [...] satellite locations.
41.6 RELATED PROJECTS
The eDiamond project (in most cases, deliberately) overlaps with several other grid projects, particularly in mammography, and more generally in medical image analysis. First, the project has strong links to the US NDMA project, which is exploring the use of Grid technology to enable a database of directly digitised (as opposed to film-screen) mammograms. IBM is also the [...]
the US NDMA project, and has provided a Shared University Research (SUR) grant to create the NDMA Grid under the leadership of the University of Pennsylvania. Now in Phase II of deployment, the project connects hospitals in Pennsylvania, Chicago, North Carolina, and Toronto. The architecture of the NDMA Grid leverages the strengths of IBM's eServer clusters – running AIX and Linux – with open protocols [...] practical experience gained in the development of the NDMA Grid. It is expected that eDiamond and NDMA will collaborate increasingly closely. A critical difference between eDiamond and the NDMA project will be that eDiamond will use standardisation techniques prior to image storage in the database. Second, there is a complementary European project, Mammogrid, which also involves Oxford University and Mirada [...]
• Large, federated databases both of metadata and images;
• Data compression and transfer;
• Effective ways of combining Grid-enabled databases of information that must be protected and which will be based in hospitals that are firewall-protected;
• Very rapid data mining techniques;
• A secure Grid infrastructure for use within a clinical environment.
41.5.3 Objectives
It is currently planned to construct a [...] consumption by others, and, as such, it adopts a service-oriented view of the Grid. Moreover, this view is based upon the notion of various entities providing services to one another under various forms of contract, and provides one of the main research themes being investigated – agent-oriented delivery of knowledge services on the Grid. The project aims to extend the research ambitions of the AKT and MIAS [...] interrogating video images, and large-scale data compression for visualisation. Within the wider UK community, Grid-enabling database technologies is a fundamental component of several of the EPSRC Pilot Projects, and is anticipated to be one of the United Kingdom's primary contributions to the proposed Open Grid [...]
³ MIAS is directed by Michael Brady and includes the Universities of Oxford, Manchester, King's [...]
diagnosis will be developed. There are three main objectives to the initial phase of the project: the development of the Grid technology infrastructure to support federated databases of huge images (and related information) within a secure environment; the design and construction of the Grid-connected workstation and database of standardised images; and the development, testing and validation of the system on a set of important applications. We consider each of these
Grid Computing – Making the Global Infrastructure a Reality. Edited by F. Berman,. these
in turn.
41.5.3.1 Development of the Grid infrastructure
There are a number of aspects to the development of a Grid infrastructure for eDiamond.
The first