Volume 23, Issue 3 (7-2025) | Int J Radiat Res 2025, 23(3): 797-807


Hosseinpour A, Piranfar A, Harati T, Veyseh S, Soltani M. Clinical applications of generative adversarial networks in medical image to image translation. Int J Radiat Res 2025; 23(3): 797-807
URL: http://ijrr.com/article-1-6681-en.html
Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran, Iran; msoltani@uwaterloo.ca
Abstract:
Generative Adversarial Networks (GANs) have emerged as powerful deep learning tools for synthesizing artificial images, a capability with considerable promise for medical image-to-image translation. Recent years have seen substantial progress in GANs tailored to cross-domain image translation, driven largely by the availability of large datasets of carefully annotated medical images. Annotating these images, however, remains a major challenge, since supervised methods require large numbers of specialized experts. To address this, cross-modality synthesis techniques have gained traction as an efficient way to reduce the complexity and cost of acquiring paired training data. This paper provides an introductory overview of the GAN variants used in image-to-image translation and then examines their applications in medical imaging. Specifically, it covers cross-modality synthesis and conditional image synthesis, highlighting their potential to improve diagnostic precision and streamline medical imaging workflows.
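
For readers unfamiliar with how adversarial training drives image-to-image translation, the minimal PyTorch sketch below illustrates one training step of a pix2pix-style conditional GAN for a hypothetical MR-to-CT mapping. It is an illustrative assumption, not the methods reviewed in this article: the toy networks, random tensors standing in for image slices, and hyperparameters (e.g. the L1 weight of 100) are placeholders chosen only to show how the generator and discriminator objectives interact.

# Minimal, illustrative sketch (not the authors' implementation): one training step of a
# pix2pix-style conditional GAN for image-to-image translation, e.g. MR -> synthetic CT.
# The networks are toy placeholders and the tensors are random stand-ins for image slices.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping a source-modality image to a target-modality image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy PatchGAN-style critic that scores (source, candidate-target) image pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, src, tgt):
        return self.net(torch.cat([src, tgt], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()

mr = torch.randn(4, 1, 64, 64)   # stand-in for a batch of source-modality (MR) slices
ct = torch.randn(4, 1, 64, 64)   # stand-in for the paired target-modality (CT) slices

# Discriminator step: real (MR, CT) pairs are labelled 1, generated pairs 0.
fake_ct = G(mr).detach()
d_real, d_fake = D(mr, ct), D(mr, fake_ct)
loss_d = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: fool the discriminator while staying close to the target via an L1 term.
fake_ct = G(mr)
d_fake = D(mr, fake_ct)
loss_g = adv_loss(d_fake, torch.ones_like(d_fake)) + 100.0 * l1_loss(fake_ct, ct)
opt_g.zero_grad()
loss_g.backward()
opt_g.step()

Unpaired cross-modality approaches discussed in the review (e.g. CycleGAN-style methods) replace the paired L1 term with a cycle-consistency loss between two such generators, so no aligned (MR, CT) pairs are required.
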
Full-Text [PDF 1101 kb]
Type of Study: Review article | Subject: Radiation Biology


Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
