American Journal of Neuroradiology
Research Article: Adult Brain
Open Access

Generative Adversarial Network–Enhanced Ultra-Low-Dose [18F]-PI-2620 τ PET/MRI in Aging and Neurodegenerative Populations

K.T. Chen, R. Tesfay, M.E.I. Koran, J. Ouyang, S. Shams, C.B. Young, G. Davidzon, T. Liang, M. Khalighi, E. Mormino and G. Zaharchuk
American Journal of Neuroradiology September 2023, 44 (9) 1012-1019; DOI: https://doi.org/10.3174/ajnr.A7961
aDepartment of Biomedical Engineering (K.T.C.), National Taiwan University, Taipei, Taiwan
bDepartment of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
cMeharry Medical College (R.T.), Nashville, Tennessee
dDepartment of Neurology and Neurological Sciences (C.B.Y., E.M.), Stanford University, Stanford, California

Abstract

BACKGROUND AND PURPOSE: Given the utility of hybrid τ PET/MR imaging in the screening, diagnosis, and follow-up of individuals with neurodegenerative diseases, we investigated whether deep learning techniques can enhance ultra-low-dose [18F]-PI-2620 τ PET/MR images to produce diagnostic-quality images.

MATERIALS AND METHODS: Forty-four healthy aging participants and patients with neurodegenerative diseases were recruited for this study, and [18F]-PI-2620 τ PET/MR data were simultaneously acquired. A generative adversarial network was trained to enhance ultra-low-dose τ images, which were reconstructed from a random sampling of 1/20 (approximately 5% of original count level) of the original full-dose data. MR images were also used as additional input channels. Region-based analyses as well as a reader study were conducted to assess the image quality of the enhanced images compared with their full-dose counterparts.

RESULTS: The enhanced ultra-low-dose τ images showed apparent noise reduction compared with the ultra-low-dose images. The regional standard uptake value ratios showed a general underestimation for both image types, especially in regions with higher uptake; however, in the healthy-but-amyloid-positive population (with relatively lower τ uptake), this bias was reduced in the enhanced ultra-low-dose images. The radiotracer uptake patterns in the enhanced images were read accurately compared with their full-dose counterparts.

CONCLUSIONS: The clinical readings of deep learning–enhanced ultra-low-dose τ PET images were consistent with those performed with full-dose imaging, suggesting the possibility of reducing the dose and enabling more frequent examinations for dementia monitoring.

ABBREVIATIONS:

AC1 = Gwet's agreement coefficient 1
AD = Alzheimer's disease
CNN = convolutional neural network
GAN = generative adversarial network
SUVR = standard uptake value ratio

More than 6 million individuals are living with Alzheimer's disease (AD) in the United States. By 2060, this number is projected to increase to nearly 14 million (https://www.cdc.gov/aging/aginginfo/alzheimers.htm). This neurodegenerative disorder leads to progressive, irreversible loss of memory and behavioral function.1 Pathologic features of AD include accumulation of amyloid β into extracellular plaques and hyperphosphorylated τ into intracellular neurofibrillary tangles, which can be identified with PET imaging.2 Abnormalities of τ mediate amyloid β–induced toxicity3 and are a close proxy of clinical status.4 Furthermore, because pathologic processes of AD begin decades before mild cognitive impairment and dementia stages, in vivo measurements of amyloid β plaques and tangles could enable early detection and an opportunity for intervention.5 Along these lines, recent work has shown that subtle elevations in τ PET can be detected in clinically healthy older adults and are predictive of subsequent decline.6

Advanced modalities such as simultaneous PET/MR imaging provide complementary morphologic and functional information with perfect spatiotemporal registration of the 2 imaging data sets,7 all of which can facilitate the diagnosis and monitoring of dementia.8,9 However, radiation exposure related to the radiotracers administered to imaging subjects presents barriers to screening, clinical follow-up, and research participation due to radiation dose thresholds. Therefore, radiotracer dose reductions have been a target for intervention for many researchers.

Deep learning methods such as convolutional neural networks (CNNs) have been used for image identification,10 generation,11,12 segmentation,13 and MR imaging–based attenuation correction.14,15 CNNs that incorporate spatially correlated MR imaging and PET information to produce standard-quality PET images from low-dose PET acquisitions (though most such studies were conducted on [18F]-fludeoxyglucose scans) have been implemented.16-21 For example, deep CNNs can reduce the radiotracer dose by at least 100-fold for [18F]-florbetaben, an in vivo biomarker of amyloid plaque buildup,22 and enhancement of both simulated (undersampled in PET/MR imaging reconstruction) and true (injected with ultra-low-dose) ultra-low-dose images resulted in the production of diagnostic-quality images comparable with standard dose images.23

Here, we investigate whether similar deep learning techniques can be used to enhance ultra-low-dose [18F]-PI-2620 τ PET/MR images2 to produce diagnostic-quality images. Compared with amyloid PET and plasma phosphorylated τ biomarkers,24,25 τ PET is stronger in discriminating AD from other neurodegenerative diseases26 and can aid regional cerebral τ analysis for the identification of various tauopathies. Because τ-tracer uptake is generally more focal and weaker in signal than that on amyloid PET images, we have found that directly applying the CNN from our previous work carries over data bias from the amyloid PET training data set.27 Therefore, in this work, we implemented a generative adversarial network (GAN) structure28 in addition to training the ultra-low-dose τ enhancement CNN from scratch. In addition, we focused on aging participants as well as those with a variety of neurodegenerative diseases. Examining asymptomatic/early dementia populations, including preclinical AD and mild cognitive impairment, is increasingly important for dementia studies, but these groups are difficult to image because the PET signal can be lower and restricted to the medial temporal lobe compared with that in AD dementia.29,30 Patients who are amyloid-positive and mildly τ-positive, who are most likely in an asymptomatic or mild cognitive impairment stage, may also require more frequent follow-up scans to monitor for disease progression.

Unlike patients undergoing PET for cancer monitoring, patients with dementia may have much longer periods in which to accrue the negative effects of medical radiation, especially if the use of imaging for monitoring expands to begin in the asymptomatic or minimally symptomatic stages. Reducing the PET tracer dose can lead to safer scans and increase the utility of hybrid PET/MR imaging for screening, clinical diagnoses, and longitudinal studies (improved follow-up adherence). With the increasing availability of data and research participation, researchers can also better understand the pathogenesis of disease and identify targets for pharmacotherapy. At the population level, reducing doses has the potential to decrease health care costs for individual patients as well as research and health care institutions.

MATERIALS AND METHODS

Forty-four participants were recruited for this study, which was approved by the Stanford University institutional review board. Written informed consent was obtained from all participants or an authorized surrogate decision-maker. Older healthy controls were recruited through the Stanford Aging and Memory Study (SAMS; https://www.alzheimers.gov/clinical-trials/stanford-memory-and-aging-study). Patients with cognitive impairment (either a clinical diagnosis of mild cognitive impairment or AD dementia) or semantic-variant primary-progressive aphasia were recruited through the Stanford Alzheimer Disease Research Center or the Stanford Center for Memory Disorders. Demographics of the patient group, including their clinical diagnoses (determined by clinical consensus of a panel of neurologists and neuropsychologists), are shown in Table 1. Seven of the healthy controls were amyloid-positive as determined by CSF (details in Trelle et al31).

Table 1:

Demographics and clinical indications of study population

PET/MR Imaging Data Acquisition

T1-weighted and T2-FLAIR MR imaging data and τ PET data were simultaneously acquired on an integrated 3T PET/MR imaging scanner (Signa; GE Healthcare); 221 [SD, 61] MBq of the τ radiotracer [18F]-PI-2620 was injected, and imaging was performed between 60 and 90 minutes after injection. The raw list-mode PET data were reconstructed for the full-dose ground truth image and were also randomly undersampled by a factor of 20 (approximately 5% of the original count level) and then reconstructed to produce an ultra-low-dose PET image. Previous studies have suggested that this method of simulating ultra-low-dose imaging is a good representation of a true injected ultra-low dose.23 TOF ordered subsets expectation maximization, with 2 iterations and 28 subsets and accounting for randoms, scatter, dead time, and attenuation, was used for all PET image reconstructions. MR imaging attenuation correction was performed using the vendor's zero TE–based method, and a 4-mm postreconstruction Gaussian filter was used for all reconstructions.
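The 1/20 random undersampling of the list-mode data can be sketched as independent event thinning; the event representation and the `undersample_listmode` helper below are hypothetical stand-ins for vendor list-mode tools, not the actual reconstruction pipeline.

```python
import random

def undersample_listmode(events, factor=20, seed=0):
    """Keep each list-mode event independently with probability 1/factor.

    Independent thinning of coincidence events preserves the Poisson
    counting statistics expected of a true low-dose acquisition.
    """
    rng = random.Random(seed)
    keep_prob = 1.0 / factor
    return [e for e in events if rng.random() < keep_prob]

# Stand-in for list-mode data: 200,000 event IDs (hypothetical)
events = list(range(200_000))
low_dose = undersample_listmode(events, factor=20)
print(f"kept {len(low_dose) / len(events):.3%} of events")  # roughly 5%
```

The thinned event stream would then be reconstructed with the same TOF ordered subsets expectation maximization settings as the full-dose data.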

Image Preprocessing

To account for any positional offset of the patient during different acquisitions, we coregistered the MR images to the PET images using the FMRIB Linear Image Registration Tool (FLIRT; http://www.fmrib.ox.ac.uk/fsl/fslwiki/FLIRT),32 with 6 df and the correlation ratio as the cost function. All images were resliced to the dimensions of the acquired PET volumes: eighty-nine 2.78-mm-thick slices with 256-by-256 1.17 × 1.17 mm2 pixels. A head mask was made from the T1-weighted image through intensity thresholding and hole filling and was applied to the PET and MR images. The voxel intensities of each volume were normalized to z scores (subtracting the mean and dividing by the SD computed within a FreeSurfer-based brain mask [http://surfer.nmr.mgh.harvard.edu] derived from the T1-weighted images) and used as inputs to the CNN.
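The in-mask z-score normalization can be sketched as follows; this is a minimal illustration on flat lists, whereas the actual pipeline operates on 3D volumes with FreeSurfer-derived masks.

```python
import math

def zscore_normalize(volume, mask):
    """Z-score a volume using the mean and SD computed within a brain mask.

    volume and mask are equal-length flat lists; mask entries are 0/1.
    All voxels (inside and outside the mask) are transformed with the
    in-mask statistics, so in-mask intensities end up with mean 0, SD 1.
    """
    in_mask = [v for v, m in zip(volume, mask) if m]
    mean = sum(in_mask) / len(in_mask)
    sd = math.sqrt(sum((v - mean) ** 2 for v in in_mask) / len(in_mask))
    return [(v - mean) / sd for v in volume]
```

Normalizing with in-mask statistics keeps extreme out-of-brain intensities from dominating the scaling of the network inputs.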

CNN Implementation

The ultra-low-dose τ network was trained using a GAN structure28 with 3916 input slices (44 data sets with 89 slices each). The generator portion of the GAN used the structure proposed in Chen et al,22,23 an encoder-decoder CNN with the U-Net33 structure (Fig 1, upper image), in which the inputs were the concatenation of multicontrast MR images (T1-weighted and T2-FLAIR) and the ultra-low-dose PET image. The full-dose PET image was treated as the ground truth, and the network was trained through residual learning.11 Briefly, the encoder portion is composed of layers that perform 2D convolutions (using 3 × 3 filters) on input 256-by-256 transverse slices, batch normalization, and rectified linear unit activation operations. We used 2-by-2 max pooling to reduce the dimensionality of the data. In the decoder portion, the data in the encoder layers are concatenated with those in the decoder layers, and linear interpolation restores the data to their original dimensions. In addition, a discriminator (Fig 1, lower image) was added to distinguish whether the output image is realistic. The discriminator portion of the GAN consists of 5 convolution blocks, each composed of convolution layers with 4 × 4 filters and 2 × 2 stride, batch normalization, and leaky rectified linear unit activation with a slope of 0.2. A convolution layer with a 3 × 3 filter is added to map the features to 1 channel as the output. The final objective for the generator is the combination of a pixel-wise L1 loss and an adversarial loss: L_G = ||ŷ − y||_1 + λL_adv(ŷ), where x is the input images, y is the standard-dose image, and ŷ = G(x) is the enhanced image. The GAN was trained with an initial learning rate of 0.0001 and a batch size of 16 over 50 epochs. The training, validation, and testing data were split at the participant level in an approximate 7:1:2 ratio, and 5-fold cross-validation was used to employ all the data for training and testing.
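The combination of a pixel-wise L1 loss with an adversarial term can be illustrated with a minimal sketch; the least-squares adversarial form and the weight `lam` are illustrative assumptions, not the exact formulation or weighting used in training.

```python
def l1_loss(pred, target):
    """Mean absolute (pixel-wise L1) difference between two flat images."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def generator_loss(pred, target, disc_score, lam=0.001):
    """L1 reconstruction loss plus a least-squares adversarial term.

    disc_score is the discriminator's output on the enhanced image; the
    generator is rewarded when the discriminator scores it as real (1.0).
    The least-squares form and lam are assumptions for illustration.
    """
    adversarial = (disc_score - 1.0) ** 2
    return l1_loss(pred, target) + lam * adversarial
```

The L1 term keeps the enhanced image close to the full-dose ground truth voxel by voxel, while the adversarial term pushes its texture toward what the discriminator accepts as a realistic full-dose image.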

FIG 1.

A schematic of the GAN (generator network, upper image; discriminator network, lower image) used in this work and its input and output channels. The arrows denote computational operations, and the tensors are denoted by boxes, with the number of channels indicated above each box. BN indicates batch normalization; Conv, convolution; Max, maximum; ReLU, rectified linear unit; tanh, hyperbolic tangent.

Assessment of Image Quality

The reconstructed images were first visually inspected for artifacts. For each data set, the region within the brain mask was considered for voxel-based analyses. For each axial section, the image quality of the enhanced PET images and the original ultra-low-dose PET images within the brain mask was compared with that of the full-dose image using the peak SNR, structural similarity,34 and root mean square error. The metrics for each subject were obtained by a weighted average (by voxel number) of the slices.
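Two of the slice-wise metrics can be sketched in plain Python (structural similarity is omitted for brevity; using the reference image's maximum as the peak in `psnr` is an assumption, as the peak definition is not specified here).

```python
import math

def rmse(ref, img):
    """Root mean square error between a reference and a test image
    (flat lists of in-mask voxel values)."""
    return math.sqrt(sum((r - x) ** 2 for r, x in zip(ref, img)) / len(ref))

def psnr(ref, img):
    """Peak SNR in dB, taking the reference image's maximum as the peak."""
    return 20 * math.log10(max(ref) / rmse(ref, img))
```

Per-subject values would then be obtained by averaging these slice metrics, weighted by the number of in-mask voxels in each slice.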

Clinical Readings

The enhanced PET images, the ultra-low-dose PET image, and the full-dose PET image of each data set were anonymized, and their series numbers were randomized and then presented by series number to 3 physicians (M.E.I.K., a dual-boarded nuclear medicine and diagnostic radiology physician; S.S., a neuroradiology fellow; G.Z., a neuroradiologist) for independent reading (the reading protocol for the identification of regional uptake is available in the Online Supplemental Data). Ten random full-dose PET images were also presented to the physicians to evaluate intrareader reproducibility. The consensus τ status read from the 3 reviewers on the full-dose images was treated as the ground truth. For each PET image, the physicians also assigned a subjective image-quality score on a 5-point scale: 1 = uninterpretable, 2 = poor, 3 = adequate, 4 = good, 5 = excellent. Also, image-quality scores were dichotomized into 1–2 (low) versus 3–5 (high), with the percentage of images with high scores calculated for each method. The agreement of the 3 readers was assessed using Gwet's agreement coefficient 1 (AC1)35 on the full-dose readings, and if high agreement was found, the readings of the 3 readers were pooled for further analysis.
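For two readers making binary positive/negative calls on the same scans, Gwet's AC1 reduces to a short computation; this is a sketch of the two-reader binary case, whereas the study pooled 3 readers across multiple regions.

```python
def gwet_ac1(reads1, reads2):
    """Gwet's AC1 for two readers making binary (1/0) calls on the same scans.

    pa is the observed agreement; the chance-agreement term
    pe = 2 * pi * (1 - pi) uses pi, the overall proportion of positive
    calls pooled across both readers.
    """
    n = len(reads1)
    pa = sum(a == b for a, b in zip(reads1, reads2)) / n
    pi = (sum(reads1) + sum(reads2)) / (2 * n)
    pe = 2 * pi * (1 - pi)
    return (pa - pe) / (1 - pe)
```

Unlike Cohen's kappa, AC1 remains stable when positive findings are rare, which suits reader studies where most regions are read as τ-negative.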

Region-Based Analysis

Region-based analyses were performed to assess the agreement of the tracer uptake among images. Cortical parcellations and cerebral segmentations based on the Desikan-Killiany Atlas36 were derived from FreeSurfer, and analyses focused on the medial temporal lobe (comprising the entorhinal cortex and amygdala) and the inferior temporal cortex. The inferior cerebellum was used as the reference region for standard uptake value ratio (SUVR) calculations for all 3 image types (full-dose, ultra-low-dose, and enhanced). The SUVRs were compared between methods (full-dose versus ultra-low-dose and full-dose versus enhanced) and evaluated with Bland-Altman plots. Particular focus was placed on the healthy controls positive for amyloid, and these participants were labeled separately on the plots. The coefficient of variation (SD divided by the mean uptake) in the medial temporal lobe and the inferior temporal cortex was also calculated to assess image noise in the image types.
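The SUVR and coefficient-of-variation computations are straightforward; a minimal sketch on flat lists of regional voxel uptake values:

```python
from statistics import fmean, pstdev

def suvr(region_uptake, reference_uptake):
    """Standard uptake value ratio: mean regional uptake divided by the
    mean uptake in the reference region (here, the inferior cerebellum)."""
    return fmean(region_uptake) / fmean(reference_uptake)

def coefficient_of_variation(uptake):
    """SD divided by mean uptake, used as a regional image-noise estimate."""
    return pstdev(uptake) / fmean(uptake)
```

A noisier reconstruction inflates the within-region SD while leaving the mean roughly unchanged, so a lower coefficient of variation in the enhanced images indicates noise reduction.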

Statistical Analysis

For quantitative tests, paired t tests at the P = .05 level were performed to compare peak SNR, structural similarity, and root mean square error metrics between the ultra-low-dose images and their CNN-enhanced counterparts.

Pairwise t tests were also performed to compare the values of the image-quality metrics across the different image-processing methods. The accuracy, sensitivity, and specificity were calculated for the readings of the ultra-low-dose and enhanced PET images. Symmetry tests were also performed to examine whether the readings produced an equal number of false-positive and false-negative results. The agreement of the 3 readers was assessed using Gwet's AC1.35 Average image scores for each method are presented. The 95% confidence interval for the difference in the proportions of high scores was constructed and compared with a predetermined noninferiority benchmark of 15%. Tests were conducted at the P = .05 level (with Bonferroni correction to account for multiple comparisons when necessary).
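The noninferiority comparison of high-score proportions can be sketched with a Wald confidence interval; the Wald construction is an assumption here, as the paper does not specify how the interval was built.

```python
import math

def high_score_diff_ci(p_test, n_test, p_full, n_full, z=1.96):
    """95% Wald CI for the difference in proportions of high-quality
    scores (test method minus full dose)."""
    diff = p_test - p_full
    se = math.sqrt(p_test * (1 - p_test) / n_test
                   + p_full * (1 - p_full) / n_full)
    return diff - z * se, diff + z * se

def noninferior(ci_lower, margin=-0.15):
    """Noninferior if the CI lower bound lies above the -15% margin."""
    return ci_lower > margin
```

For example, a test method scoring "high" as often as full dose passes, while one scoring high 40 percentage points less often fails the margin.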

RESULTS

The enhanced τ images showed apparent noise reduction with smoother image texture compared with the ultra-low-dose images (Fig 2). Quantitatively, all 3 image-based metrics improved significantly (P < .05/3, Fig 3) after enhancement of the ultra-low-dose images. The coefficient of variation in regional SUVRs was reduced in the enhanced images (P < .001 for all comparisons with the enhanced images), indicating noise reduction (Fig 4). The regional SUVRs showed generally low bias and variability between the full-dose images and the other image types. On average, both image types underestimated the SUVRs, driven by regions with higher uptake (though a slight overestimation by the ultra-low-dose images contributed to a smaller coefficient of variation than that of the full-dose images). When focusing on the healthy-but-amyloid-positive population (with generally lower τ uptake), this bias was reduced (P < .025, paired t tests corrected for 2 comparisons) in the enhanced images (average SUVR difference: 0.0101 [SD, 0.0312] in the inferior temporal cortex and −0.0014 [SD, 0.1238] in the medial temporal lobe) relative to the ultra-low-dose images (average SUVR difference: −0.0153 [SD, 0.0374] in the inferior temporal cortex and −0.0566 [SD, 0.1451] in the medial temporal lobe) (Fig 5).

FIG 2.

Representative τ PET images and their corresponding T1-weighted MR image in 2 individuals positive for amyloid. The enhanced PET image shows greatly reduced noise compared with the ultra-low-dose PET image. Arrows correspond to regions of abnormal elevated τ uptake. MCI indicates mild cognitive impairment.

FIG 3.

Image-quality metrics comparing the ultra-low-dose PET (LD) and the ultra-low-dose enhanced PET (E) images with the ground truth full-dose PET image. PSNR indicates peak signal-to-noise ratio; SSIM, structural similarity; RMSE, root mean square error.

FIG 4.

Mean (SD) of SUVR coefficient of variation in selected brain regions. E indicates enhanced images; FD, full-dose images; Inf. Cerebel, inferior cerebellum; MTL, medial temporal lobe; LD, ultra-low-dose image; Inf. Temporal, inferior temporal cortex.

FIG 5.

Bland-Altman plots comparing mean SUVRs in the ultra-low-dose PET and the enhanced PET with the full-dose PET images. The red dots denote healthy controls positive for amyloid, and the regions selected are the FreeSurfer labels, which make up the bilateral medial temporal lobe (entorhinal, amygdala) and the bilateral inferior temporal cortex.

Although the 3 readers had discussed and agreed on a reading protocol for the identification of regional uptake, they exhibited different preferences in rating the images on the 5-point scale (Fig 6). However, when using the dichotomized scale, the readers showed agreement in their ratings of the 3 image types. Noninferiority tests at the predetermined threshold of −15% for subjective image quality showed that both the ultra-low-dose and enhanced images were inferior to the full-dose images.

FIG 6.

Quality scores of different image types as rated by 3 expert readers. Image quality scores: 1, uninterpretable; 2, poor; 3, adequate; 4, good; 5, excellent. FD indicates full-dose; LD, ultra-low-dose; E, enhanced.

The intrareader reproducibility and interreader agreement were high in reading the full-dose images (Table 2). Among image types, the readers also had high agreement in evaluating the status of τ uptake in the regions (Gwet's AC1 > 0.65, Table 2); the uptake in the ultra-low-dose and enhanced images was read accurately (accuracy > 0.84 for all relevant regions, Table 3) compared with their full-dose counterparts.

Table 2:

Gwet's AC1 between and within readers of 10 randomly selected full-dose images on the tracer uptake in relevant brain regions and on the subjective image quality

Table 3:

Accuracy, sensitivity, and specificity of ultra-low-dose images and enhanced images compared with the full-dose images

DISCUSSION

In this study, we have proposed a GAN structure to produce diagnostic-quality τ PET images from input representing a simulated 5% dose PET acquisition. There are many reasons to reduce the dose for dementia PET imaging, including enabling more frequent follow-up scans (under current radiation safety levels) to monitor for disease progression, especially in individuals who are asymptomatic or in the mild cognitive impairment stage. Another benefit is extending access to advanced tracers in rural regions that lie outside current radiotracer delivery service areas, a problem that affects up to 10% of the US population. Because τ PET images generally show weaker and more focal uptake than those of other radiotracers such as amyloid agents and also contain more image noise, we chose a GAN structure for training to generate images with texture more similar to the full-dose images than that achieved with only a U-Net, as in Chen et al.22 We have shown in a previous study that directly using a U-Net trained on amyloid images is inferior to using a network trained with τ images, which takes the image properties of the different radiotracers into account during training.27

The generated images show that the noise in the PET images is greatly reduced through network training. The lower coefficient of variation in selected regions relevant to the participant population also reflects this finding. In addition, the peak SNR, structural similarity, and root mean square error metrics show that the generated images also resemble the full-dose images more than their ultra-low-dose counterparts.

The Bland-Altman plots showed that the ultra-low-dose and enhanced images were similar in their regional SUVR biases compared with the full-dose images (Fig 5). In general, in participants with high uptake in the inferior temporal cortex, SUVR underestimation was present compared with the values in the full-dose images. However, when we examined the healthy control population positive for amyloid, a demographic that needs close attention in tracking the participants' progression, the enhanced images showed less bias when calculating the SUVRs in regions relevant to neurodegeneration. This finding shows the potential of using deep learning–based enhancement of ultra-low-dose PET images in subjects needing more frequent PET follow-up and is a first step in translating this method to routine clinical and scientific use for these subjects.

For the reader study, the readers rated 2 aspects of each image: subjective image quality and whether the images provide clinical information related to the τ imaging. For the former, a 5-point Likert scale was used; for the latter, because there is no official rating scale established for this particular τ radiotracer, the readers evaluated whether there is increased uptake in a number of regions relevant to τ imaging.

The results for subjective image quality showed that the readers each had a different preference: reader 1 was generous in assigning quality scores for all image types; reader 2 rated the ultra-low-dose images, which contained more image noise, lower; reader 3, on the contrary, did not prefer the enhanced images, in which more image smoothing occurred. However, when we examined the dichotomized scale, the ratings from all 3 readers showed that more full-dose images were scored as "high-quality" than the other 2 image types, for which the proportion of high-quality images fell below the noninferiority threshold. This finding also highlights the challenge of deep learning enhancement of PET images with weak focal uptake. It is possible that 95% undersampling is too great for the current GAN to synthesize similar quality, given the number of cases to which we had access.

In reading the uptake in relevant regions, Gwet's AC1 showed high intrareader agreement (reproducibility) as well as interreader agreement on the radiotracer uptake in selected regions. Most interesting, the readings of the different image types across the 3 readers also showed high agreement, indicating that both the enhanced and ultra-low-dose images could provide uptake information similar to that of the full-dose images, with a slight benefit for the enhanced images. Therefore, the enhanced ultra-low-dose images provide a tool for readers who do not prefer noisy images.

This study has several limitations. Because there is no official guideline on how to read τ images and how to evaluate their image quality, we evaluated the uptake patterns on the basis of criteria agreed to by the 3 readers, using a positive/negative scale in several important ROIs. Most (31/44) of the participants in this study were healthy controls, which would contribute to an imbalance in our training data. On the other hand, healthy controls are an important focus. We have shown, in a previous study, that matching target populations in the training and testing data is important for optimal results;37 if the training data were overweighted to patients with AD, the performance of the GAN might be suboptimal in a healthy control test set. Moreover, healthy controls are increasingly becoming the focus of research studies38 and early prevention clinical trials, highlighting the value of tracking τ in at-risk healthy controls in addition to patients with AD (when cognitive decline symptoms have already manifested).

In the future, more sophisticated networks will be evaluated to better replicate the image texture of the full-dose images. More complex networks could potentially allow further dose reduction, better distinguish the ultra-low-dose from the enhanced images, and minimize the effect of the network on reader preference. On the other hand, we did not experiment with simpler, non-deep-learning-based methods such as image filtering or evaluate their noise-reduction performance. However, from our experience training PET-only networks (which lack the morphologic information that MR imaging inputs provide),22 such methods tend to produce inferior results, and we suspect this would be even more pronounced for τ because its uptake is weaker and more focal. A larger and more diverse participant population for the training and testing sets would also likely improve performance. By acquiring data from participants with higher τ uptake or more advanced dementia, we could expand our analyses to regions relevant to those populations rather than focusing solely on the 3 regions in this work. We examined only 1 dose-reduction level, estimated on the basis of earlier work with FDG and amyloid tracers, in part to limit the demands on the 3 clinical readers. Low-dose images with less undersampling could produce improved results, which could be further confirmed with actual low-dose acquisitions and region-based validation of SUVRs between the 2 low-dose regimens.
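Region-based SUVR validation reduces, per region, to the ratio of mean uptake in a target ROI to mean uptake in a reference region. A minimal sketch with synthetic data (the volume, ROI masks, and reference-region choice are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

# Illustrative SUVR computation: mean uptake in a target ROI divided by mean
# uptake in a reference region. All data below are synthetic placeholders.

def suvr(pet, target_mask, reference_mask):
    """Standardized uptake value ratio between two boolean ROI masks."""
    return pet[target_mask].mean() / pet[reference_mask].mean()

rng = np.random.default_rng(0)
pet = rng.uniform(0.5, 2.0, size=(8, 8, 8))          # synthetic PET volume
target = np.zeros_like(pet, dtype=bool)
target[2:4, 2:4, 2:4] = True                         # hypothetical target ROI
reference = np.zeros_like(pet, dtype=bool)
reference[5:7, 5:7, 5:7] = True                      # hypothetical reference ROI
print(round(suvr(pet, target, reference), 3))
```

Comparing such per-region SUVRs between low-dose regimens and the full-dose standard would quantify whether uptake ratios, not just visual reads, are preserved.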

CONCLUSIONS

The deep learning-enhanced images could be read clinically for regional uptake patterns of τ accumulation, with results similar to those of the full-dose images. With further refinement, this technique could increase the utility of hybrid PET/MR imaging in clinical diagnosis and longitudinal studies.

Footnotes

  • This work was supported by the National Institutes of Health (P41-EB015891, R01-EB025220, R56-AG071558, K99-AG068310-01A1, R01-AG048076, R21-AG058859), the Stanford Alzheimer's Disease Research Center (P30-AG06615), the Yushan Fellow Program of the Ministry of Education (NTU-112V1015-3, R.O.C.), the National Science and Technology Council (110-2222-E-002-015-MY3, R.O.C.), the National Health Research Institutes (NHRI-EX112-11205EC), the Alzheimer's Association (AARFD-21-849349), GE Healthcare, and Life Molecular Imaging.

  • Disclosure forms provided by the authors are available with the full text and PDF of this article at www.ajnr.org.

Indicates open access to non-subscribers at www.ajnr.org

References

  1. Gallucci M, Limbucci N, Catalucci A, et al. Neurodegenerative diseases. Radiol Clin North Am 2008;46:799–817, vii doi:10.1016/j.rcl.2008.06.002 pmid:18922294
  2. Mormino EC, Toueg TN, Azevedo C, et al. Tau PET imaging with 18F-PI-2620 in aging and neurodegenerative diseases. Eur J Nucl Med Mol Imaging 2021;48:2233–44 doi:10.1007/s00259-020-04923-7 pmid:32572562
  3. Nelson PT, Braak H, Markesbery WR. Neuropathology and cognitive impairment in Alzheimer disease: a complex but coherent relationship. J Neuropathol Exp Neurol 2009;68:1–14 doi:10.1097/NEN.0b013e3181919a48 pmid:19104448
  4. Nelson PT, Alafuzoff I, Bigio EH, et al. Correlation of Alzheimer disease neuropathologic changes with cognitive status: a review of the literature. J Neuropathol Exp Neurol 2012;71:362–81 doi:10.1097/NEN.0b013e31825018f7 pmid:22487856
  5. Sperling R, Mormino E, Johnson K. The evolution of preclinical Alzheimer's disease: implications for prevention trials. Neuron 2014;84:608–22 doi:10.1016/j.neuron.2014.10.038 pmid:25442939
  6. Ossenkoppele R, Smith R, Mattsson-Carlgren N, et al. Accuracy of tau positron emission tomography as a prognostic marker in preclinical and prodromal Alzheimer disease: a head-to-head comparison against amyloid positron emission tomography and magnetic resonance imaging. JAMA Neurol 2021;78:961–71 doi:10.1001/jamaneurol.2021.1858 pmid:34180956
  7. Judenhofer MS, Wehrl HF, Newport DF, et al. Simultaneous PET-MRI: a new approach for functional and morphological imaging. Nat Med 2008;14:459–65 doi:10.1038/nm1700 pmid:18376410
  8. Catana C, Drzezga A, Heiss W-D, et al. PET/MRI for neurologic applications. J Nucl Med 2012;53:1916–25 doi:10.2967/jnumed.112.105346 pmid:23143086
  9. Drzezga A, Barthel H, Minoshima S, et al. Potential clinical applications of PET/MR imaging in neurodegenerative diseases. J Nucl Med 2014;55:47S–55S doi:10.2967/jnumed.113.129254 pmid:24819417
  10. He K, Zhang X, Ren S, et al. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile. December 7–13, 2015:1026–34
  11. Chen H, Zhang Y, Kalra MK, et al. Low-dose CT with a residual encoder-decoder convolutional neural network (RED-CNN). IEEE Trans Med Imaging 2017;36:2524–35 doi:10.1109/TMI.2017.2715284 pmid:28622671
  12. Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Annu Rev Biomed Eng 2017;19:221–48 doi:10.1146/annurev-bioeng-071516-044442 pmid:28301734
  13. Yu Y, Xie Y, Thamm T, et al. Use of deep learning to predict final ischemic stroke lesions from initial magnetic resonance imaging. JAMA Netw Open 2020;3:e200772 doi:10.1001/jamanetworkopen.2020.0772 pmid:32163165
  14. Liu F, Jang H, Kijowski R, et al. Deep learning MR imaging-based attenuation correction for PET/MR imaging. Radiology 2018;286:676–84 doi:10.1148/radiol.2017170700 pmid:28925823
  15. Torrado-Carvajal A, Vera-Olmos J, Izquierdo-Garcia D, et al. Dixon-VIBE deep learning (DIVIDE) pseudo-CT synthesis for pelvis PET/MR attenuation correction. J Nucl Med 2019;60:429–35 doi:10.2967/jnumed.118.209288 pmid:30166357
  16. Bland J, Mehranian A, Belzunce MA, et al. MR-guided kernel EM reconstruction for reduced dose PET imaging. IEEE Trans Radiat Plasma Med Sci 2018;2:235–43 doi:10.1109/TRPMS.2017.2771490 pmid:29978142
  17. Chaudhari AS, Mittra E, Davidzon GA, et al. Low-count whole-body PET with deep learning in a multicenter and externally validated study. NPJ Digit Med 2021;4:127 doi:10.1038/s41746-021-00497-2 pmid:34426629
  18. Hosch R, Weber M, Sraieb M, et al. Artificial intelligence guided enhancement of digital PET: scans as fast as CT? Eur J Nucl Med Mol Imaging 2022;49:4503–15 doi:10.1007/s00259-022-05901-x pmid:35904589
  19. Lei Y, Dong X, Wang T, et al. Whole-body PET estimation from low count statistics using cycle-consistent generative adversarial networks. Phys Med Biol 2019;64:215017 doi:10.1088/1361-6560/ab4891 pmid:31561244
  20. Sanaat A, Arabi H, Mainta I, et al. Projection-space implementation of deep learning-guided low-dose brain PET imaging improves performance over implementation in image-space. J Nucl Med 2020;61:1388–96 doi:10.2967/jnumed.119.239327 pmid:31924718
  21. Xiang L, Qiao Y, Nie D, et al. Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI. Neurocomputing 2017;267:406–16 doi:10.1016/j.neucom.2017.06.048 pmid:29217875
  22. Chen KT, Gong E, de Carvalho Macruz FB, et al. Ultra-low-dose 18F-florbetaben amyloid PET imaging using deep learning with multi-contrast MRI inputs. Radiology 2019;290:649–56 doi:10.1148/radiol.2018180940 pmid:30526350
  23. Chen KT, Toueg TN, Koran ME, et al. True ultra-low-dose amyloid PET/MRI enhanced with deep learning for clinical interpretation. Eur J Nucl Med Mol Imaging 2021;48:2416–25 doi:10.1007/s00259-020-05151-9 pmid:33416955
  24. Janelidze S, Mattsson N, Palmqvist S, et al. Plasma P-τ181 in Alzheimer's disease: relationship to other biomarkers, differential diagnosis, neuropathology and longitudinal progression to Alzheimer's dementia. Nat Med 2020;26:379–86 doi:10.1038/s41591-020-0755-1 pmid:32123385
  25. Palmqvist S, Janelidze S, Quiroz YT, et al. Discriminative accuracy of plasma phospho-tau217 for Alzheimer disease vs other neurodegenerative disorders. JAMA 2020;324:772–81 doi:10.1001/jama.2020.12134 pmid:32722745
  26. Ossenkoppele R, Rabinovici GD, Smith R, et al. Discriminative accuracy of [18F]flortaucipir positron emission tomography for Alzheimer disease vs other neurodegenerative disorders. JAMA 2018;320:1151–62 doi:10.1001/jama.2018.12917 pmid:30326496
  27. Chen KT, Adeyeri O, Toueg TN, et al. Generalizing ultra-low-dose PET/MRI networks across radiotracers: from amyloid to tau. In: Proceedings of the Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine, Virtual. May 15–20, 2021
  28. Goodfellow IJ, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. In: Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Canada. December 8–13, 2014:2672–80
  29. Braak H, Del Tredici K. The preclinical phase of the pathological process underlying sporadic Alzheimer's disease. Brain 2015;138:2814–33 doi:10.1093/brain/awv236 pmid:26283673
  30. Hanseeuw BJ, Betensky RA, Jacobs HIL, et al. Association of amyloid and tau with cognition in preclinical Alzheimer disease: a longitudinal study. JAMA Neurol 2019;76:915–24 doi:10.1001/jamaneurol.2019.1424 pmid:31157827
  31. Trelle AN, Carr VA, Wilson EN, et al. Association of CSF biomarkers with hippocampal-dependent memory in preclinical Alzheimer disease. Neurology 2021;96:e1470–81 doi:10.1212/WNL.0000000000011477 pmid:33408146
  32. Jenkinson M, Beckmann CF, Behrens TE, et al. FSL. Neuroimage 2012;62:782–90 doi:10.1016/j.neuroimage.2011.09.015 pmid:21979382
  33. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Proceedings of Medical Image Computing and Computer-Assisted Intervention, MICCAI 2015, Munich, Germany. October 5–9, 2015
  34. Wang Z, Bovik AC, Sheikh HR, et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 2004;13:600–12 doi:10.1109/tip.2003.819861 pmid:15376593
  35. Gwet KL. Computing inter-rater reliability and its variance in the presence of high agreement. Br J Math Stat Psychol 2008;61:29–48 doi:10.1348/000711006X126600 pmid:18482474
  36. Desikan RS, Ségonne F, Fischl B, et al. An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 2006;31:968–80 doi:10.1016/j.neuroimage.2006.01.021 pmid:16530430
  37. Guo J, Gong E, Fan AP, et al. Predicting 15O-water PET cerebral blood flow maps from multi-contrast MRI using a deep convolutional neural network with evaluation of training cohort bias. J Cereb Blood Flow Metab 2020;40:2240–53 doi:10.1177/0271678X19888123 pmid:31722599
  38. Ozlen H, Pichet Binette A, Köbe T, et al; Alzheimer's Disease Neuroimaging Initiative, the Harvard Aging Brain Study, the Presymptomatic Evaluation of Experimental or Novel Treatments for Alzheimer Disease Research Group. Spatial extent of amyloid-beta levels and associations with tau-PET and cognition. JAMA Neurol 2022;79:1025–35 doi:10.1001/jamaneurol.2022.2442 pmid:35994280
  • Received November 3, 2022.
  • Accepted after revision July 11, 2023.
  • © 2023 by American Journal of Neuroradiology
Cite this article
K.T. Chen, R. Tesfay, M.E.I. Koran, J. Ouyang, S. Shams, C.B. Young, G. Davidzon, T. Liang, M. Khalighi, E. Mormino, G. Zaharchuk
Generative Adversarial Network–Enhanced Ultra-Low-Dose [18F]-PI-2620 τ PET/MRI in Aging and Neurodegenerative Populations
American Journal of Neuroradiology Sep 2023, 44 (9) 1012-1019; DOI: 10.3174/ajnr.A7961
