Automated nanoparticle segmentation in transmission electron microscopy images: a synthetic data approach
Date
2025-09-30
Abstract
Transmission Electron Microscopy (TEM) is pivotal for obtaining structural information at nanometre-scale resolution, with applications such as characterising defects, understanding shape transformations of nanocrystals and measuring nanoparticle properties. To fully comprehend such nanoparticle behaviours and increase reproducibility in research, accurate particle characterisation is essential. While manual annotation and traditional software have often been employed for particle measurement and identification, yielding satisfactory outcomes in some scenarios, these methods lack scalability and reproducibility. Advances in software-controlled operation, high-speed imaging and automation in nanoparticle synthesis have enabled instruments to generate terabytes of data per session, which manual analysis cannot accommodate, leaving data underutilised. Machine learning algorithms for image analysis have shown marked improvements in segmentation accuracy, with applications in fields as varied as autonomous driving and the medical sciences, and TEM likewise benefits from automated, computationally driven pipelines. Amongst other applications, machine learning algorithms have been implemented to automate the nanoparticle segmentation process, but these algorithms require sufficient, task-specific training data, and acquiring such datasets, which demand domain expertise to label, remains a bottleneck.
To overcome the challenges of manual segmentation and labelling, we introduce a novel framework that uses synthetic data to automate nanoparticle segmentation. Our method utilises Python-based computational tools to generate synthetic images efficiently. By designing an algorithm that allows parameters such as magnification, particle size and shape, background, illumination, contrast and noise to be varied, we can generalise synthetic dataset creation to cover a wide range of experimental data and facilitate training of an instance segmentation model (Detectron2). The features segmented by the model are used to obtain metadata such as circularity, eccentricity, minor and major axes, solidity, convexity, aspect ratio and area. To comply with FAIR data principles, this information is incorporated into Datasette, an interactive, SQL-based tool for organisation, easy access and sharing of data and metadata. All steps in the pipeline are implemented in a graphical user interface (GUI) for usability. The method is validated against several sets of experimental TEM data, demonstrating the model's ability to generalise to real-world, noisy data. Furthermore, we explore the framework's integration with algorithms for analysing dynamic processes in Liquid-Phase TEM, enabling, for example, efficient detection and tracking of nanoparticles over time. We also integrate the framework with LiberTEM for live particle segmentation.
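The shape metadata described above (area, circularity, eccentricity, aspect ratio) can be computed directly from a segmentation mask. The sketch below is illustrative only, not the thesis's actual implementation: it renders a single synthetic circular particle with NumPy and derives the metrics from pixel moments; the function names and parameters are hypothetical.

```python
import numpy as np

def synthetic_particle_image(size=128, radius=20, noise=0.05, seed=0):
    """Render one circular particle on a noisy background (illustrative generator)."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[:size, :size]
    mask = (yy - size // 2) ** 2 + (xx - size // 2) ** 2 <= radius ** 2
    img = 0.2 * np.ones((size, size)) + noise * rng.standard_normal((size, size))
    img[mask] += 0.6  # particle brighter than background
    return img, mask

def shape_metrics(mask):
    """Area, perimeter, circularity, eccentricity and aspect ratio from a binary mask."""
    area = mask.sum()
    # Perimeter: particle pixels with at least one 4-connected background neighbour.
    # Pixel counting only approximates the true contour length, so the
    # circularity 4*pi*A/P**2 is approximate and may exceed 1 for a disk.
    p = np.pad(mask, 1)
    interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    circularity = 4 * np.pi * area / perimeter ** 2
    # Second central moments of the pixel coordinates give the axes of the
    # equivalent ellipse (full axis length = 4 * sqrt(eigenvalue)).
    ys, xs = np.nonzero(mask)
    evals = np.sort(np.linalg.eigvalsh(np.cov(np.stack([ys, xs]))))[::-1]
    major, minor = 4 * np.sqrt(evals)
    eccentricity = np.sqrt(1 - (minor / major) ** 2)
    return {"area": int(area), "circularity": float(circularity),
            "eccentricity": float(eccentricity),
            "aspect_ratio": float(major / minor)}

img, mask = synthetic_particle_image()
m = shape_metrics(mask)
```

For a circular particle the eccentricity is near zero and the aspect ratio near one; in practice a library such as scikit-image's `regionprops` provides these measures with sub-pixel perimeter estimates.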
Lastly, we explore the use of reinforcement learning to automate microscope alignment, a traditionally manual and expert-driven task. We train a deep Q-learning agent to optimise beam positioning using real-time image feedback, without requiring a digital twin. This approach demonstrates the potential for closed-loop control systems that enhance imaging stability and reduce operator burden.
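The closed-loop idea can be illustrated with a toy stand-in. The thesis trains a deep Q-network on image feedback; here a tabular Q-learning agent, with a purely hypothetical discrete "beam offset" environment, reward and hyperparameters, shows the same epsilon-greedy update rule driving a misaligned beam back to centre.

```python
import numpy as np

OFFSETS = np.arange(-5, 6)      # discretised beam offsets (states)
ACTIONS = np.array([-1, 0, 1])  # shift beam left / hold / shift right
rng = np.random.default_rng(0)
Q = np.zeros((len(OFFSETS), len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    """Apply an action; return the next state index and reward (-|offset|)."""
    nxt = int(np.clip(OFFSETS[s] + ACTIONS[a], -5, 5)) + 5
    return nxt, -abs(int(OFFSETS[nxt]))

for episode in range(500):
    s = int(rng.integers(len(OFFSETS)))       # random starting misalignment
    for _ in range(20):
        # Epsilon-greedy action selection.
        a = int(rng.integers(len(ACTIONS))) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # Standard Q-learning temporal-difference update.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if OFFSETS[s] == 0:                   # beam centred: end episode
            break

# Greedy rollout: the learned policy walks an offset of +3 back to centre.
s = 8
for _ in range(10):
    s, _ = step(s, int(Q[s].argmax()))
```

In the deep variant the Q-table is replaced by a network that maps raw microscope images to action values, which is what removes the need for a hand-built state representation or digital twin.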
The structure of the thesis is as follows: Introduction; Literature Review; Mathematical Methods; Synthetic Data Generator; Proof of Concept - Segmenting Experimental Data; Bridging the Gap Between Research and Application - TEMPOS; Video and Live Segmentation; Towards Automated Microscope Alignment with Reinforcement Learning; Conclusion and Further Directions.
Publisher
University of Limerick
Type
Thesis
Rights
http://creativecommons.org/licenses/by-nc-sa/4.0/
