Deep Science: Alzheimer's, Drones, Machine Learning & More

The sheer volume of research publications appearing today makes it impossible for any individual to stay fully current, a challenge particularly acute in machine learning. This rapidly evolving discipline now impacts, and generates research within, nearly every sector and organization. This feature aims to collect the most significant recent findings and papers – with a primary focus on, but not limited to, artificial intelligence – and explain why they matter.
This installment covers a company employing unmanned aerial vehicles (UAVs) for forest surveying, an examination of how machine learning techniques can be applied to analyze social media connections and forecast Alzheimer’s disease, advancements in computer vision technology for sensors deployed in space, and additional updates on recent technological developments.
Predicting Alzheimer’s through speech patterns
The application of machine learning is increasingly assisting with medical diagnoses, as these technologies excel at recognizing subtle patterns that may be imperceptible to human observation. Researchers at IBM have identified potential indicators within speech characteristics that suggest an individual may be at risk of developing Alzheimer’s disease.
The diagnostic process requires only a short sample – a few minutes – of typical speech recorded during a routine clinical evaluation. Utilizing an extensive dataset, the Framingham Heart Study, which dates back to 1948, the research team was able to pinpoint speech-related patterns present in individuals who were subsequently diagnosed with Alzheimer’s. The system achieves an accuracy rate of approximately 71%, or an area under the curve of 0.74 for those familiar with statistical analysis. While not definitive, this level of accuracy represents a significant improvement over existing preliminary tests, which offer little more than a 50/50 chance of correct prediction at this early stage.
The significance of this development lies in the potential for earlier detection of Alzheimer’s, which in turn allows for more effective disease management. Although a cure remains elusive, various treatments and lifestyle adjustments can help to postpone or lessen the severity of the disease’s symptoms. This rapid, non-invasive assessment of individuals without symptoms could serve as a valuable new screening method and effectively showcases the practical benefits of machine learning technology.
It’s important to note that the research paper does not detail specific speech symptoms easily identifiable in everyday conversation – the identified patterns involve a complex combination of speech features that require specialized analysis.
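To make the reported numbers concrete, here is a small illustrative sketch (with made-up labels and scores, not the study's data) of how AUC differs from plain accuracy: AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, independent of any decision threshold.

```python
# Illustrative only: hypothetical labels (1 = later diagnosed, 0 = not)
# and hypothetical model scores, to show how AUC and accuracy are computed.

def auc(labels, scores):
    """AUC: probability a random positive outscores a random negative (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    """Fraction of cases classified correctly at a fixed score threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

labels = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.9, 0.6, 0.4, 0.3, 0.7, 0.2, 0.1, 0.55]

print(round(auc(labels, scores), 4))       # 0.8125
print(round(accuracy(labels, scores), 2))  # 0.75
```

An AUC of 0.74, as in the IBM work, means the model ranks a future Alzheimer's case above a non-case about 74% of the time – well above the roughly 50% chance level of earlier preliminary tests.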
So-cell networks
A crucial aspect of robust deep learning research involves ensuring a network’s ability to perform well with data it hasn’t been specifically trained on. However, it’s uncommon to see models tested against entirely unfamiliar datasets, even though doing so could be quite valuable.
A team of researchers at Uppsala University in Sweden repurposed a model originally designed for identifying communities and relationships within social media platforms, and applied it – with necessary modifications – to analyze tissue scans. These scans featured tissue samples processed to generate images containing numerous small points, each representing mRNA.
Traditionally, identifying and categorizing the various cell groups, representing different tissue types and regions, requires a manual process. However, the graph neural network, initially created to recognize social clusters based on shared characteristics within digital environments, demonstrated an ability to accomplish a comparable task with cells. (Refer to the image above.)
“Our approach utilizes cutting-edge AI techniques – namely, graph neural networks developed for social network analysis – and adapts them to decipher biological patterns and variations within tissue samples. Cells can be viewed as analogous to social groups, defined by the shared activities they exhibit,” explained Carolina Wählby of Uppsala University.
This serves as a compelling demonstration not only of the adaptability of neural networks, but also of the recurring nature of structures and architectures across different scales and in diverse situations. The principle of correspondence – mirroring patterns between macro and micro levels – is clearly illustrated.
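The core idea can be made concrete with a toy sketch (this is not the Uppsala team's graph neural network, just the graph view that social-network methods start from): treat each mRNA spot as a node, connect nearby spots, and group the nodes – here using simple connected components as a crude stand-in for community detection.

```python
# Toy sketch: mRNA spots as graph nodes, edges between nearby spots,
# then grouping -- a simplified analogue of social "community" detection.
from itertools import combinations
from math import dist

def build_graph(points, radius):
    """Connect any two points closer than `radius`."""
    edges = {i: set() for i in range(len(points))}
    for i, j in combinations(range(len(points)), 2):
        if dist(points[i], points[j]) < radius:
            edges[i].add(j)
            edges[j].add(i)
    return edges

def communities(edges):
    """Crude stand-in for community detection: connected components via DFS."""
    seen, groups = set(), []
    for start in edges:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node in group:
                continue
            group.add(node)
            stack.extend(edges[node] - group)
        seen |= group
        groups.append(sorted(group))
    return groups

# Two spatially separated "tissue regions" of spots.
points = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
print(communities(build_graph(points, radius=2)))  # [[0, 1, 2], [3, 4, 5]]
```

A graph neural network goes further by learning node features and relationships rather than relying on a fixed distance rule, but the underlying graph-of-neighbors representation is the same.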
Drones in nature
Our nation’s forests and timberlands contain a vast number of trees, but precise quantification is essential for official reporting. Determining growth rates across different areas, assessing tree density and species composition, and monitoring the extent of disease or wildfire damage all require careful estimation. Currently, this process relies on a combination of methods, with aerial photography and scanning providing broad overviews, while detailed on-the-ground observations are time-consuming and limited in scope.
Treeswift is developing a solution by equipping drones with the necessary sensors for both autonomous navigation and precise forest measurements. These drones can traverse forests much more quickly than human observers, enabling efficient tree counting, problem detection, and comprehensive data collection. The company is in its early stages of development, having originated at the University of Pennsylvania and received a Small Business Innovation Research (SBIR) grant from the National Science Foundation.
“As companies increasingly rely on forest resources to address climate change, there’s a growing demand for skilled professionals, which is not being met by current workforce growth,” explains Steven Chen, co-founder and CEO of Treeswift, and a doctoral candidate in Computer and Information Science at Penn Engineering, as reported by Penn News. “My goal is to empower foresters to enhance their productivity. These robotic systems are intended to augment human capabilities, not replace jobs, by providing innovative tools to those with the expertise and dedication to manage our forests effectively.”
Drones are also demonstrating significant advancements in underwater applications. Autonomous underwater vehicles are being utilized to map the ocean floor, monitor ice formations, and track marine life. However, a common limitation of these vehicles is the need for periodic retrieval for recharging and data download.
Nina Mahmoudian, a professor of engineering at Purdue University, has designed a docking system that allows submersibles to connect automatically for power replenishment and data transfer.
An underwater yellow marine robot (left) locates a mobile docking station to recharge and transmit data before resuming its mission. (Purdue University photo/Jared Pike)

The submersible requires a specialized nosecone capable of identifying and securely connecting to a docking station. This station can be either an independent autonomous surface vessel or a fixed installation. The key feature is the ability for the submersible to pause operations for recharging and data transmission before continuing its task. This also ensures that valuable data is not lost if the vehicle itself is ever lost – a real risk in marine environments.
A demonstration of this system can be viewed below:
https://youtu.be/_kS0-qc_r0
Sound in theory
Unmanned aerial vehicles are likely to increasingly populate urban environments, although fully automated personal helicopters remain a future prospect. However, widespread drone usage will inevitably create ongoing noise – therefore, efforts are continually being made to minimize the turbulence and associated sound generated by wings and propellers.
This visual depicts turbulence, though it may look like flames.

Scientists at the King Abdullah University of Science and Technology have developed a novel and more effective method for modeling airflow in these scenarios. The complexity of fluid dynamics means the key is to focus computational resources on the most critical aspects of the problem. Their approach involved high-resolution rendering only of the flow immediately surrounding the surface of the conceptual aircraft, as they determined that detailed information beyond a certain distance provided limited value. Enhancements to realistic models do not always require universal improvement – ultimately, the accuracy of the results is the primary concern.
Machine learning in space
Significant advancements have been made in computer vision algorithms, and as their performance increases, they are increasingly being utilized directly on devices rather than relying on centralized data centers. It is now quite typical for devices equipped with cameras, such as smartphones and Internet of Things (IoT) devices, to perform some initial machine learning tasks locally on captured images. However, applying this technology in a space environment presents unique challenges.
Image Credits: Cosine

Until recently, the energy demands of performing machine learning operations in space were prohibitively high, making it impractical. That energy could instead be allocated to activities like acquiring additional images or transmitting data back to Earth. HyperScout 2 is currently investigating the feasibility of on-board machine learning for space applications, and its satellite is now employing computer vision methods to analyze images immediately after capture, prior to transmission. (For example, identifying “a cloud, Portugal, or a volcano…”)
Currently, the immediate advantages are limited, but the capability of object detection can be readily integrated with other functionalities to unlock new applications. These include conserving energy by only processing data when objects of interest are detected, and providing supplementary information to other systems to enhance their performance.
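The energy-saving pattern described above can be sketched in a few lines. This is a hypothetical illustration, not HyperScout 2's actual pipeline: a stubbed on-board classifier labels each captured frame, and only frames of interest are queued for the downlink.

```python
# Hypothetical sketch of on-board filtering: classify each frame locally and
# downlink only frames of interest, saving bandwidth and transmit energy.
# The labels and the classifier stub are illustrative assumptions.

INTERESTING = {"volcano", "wildfire", "flood"}

def classify(frame):
    """Stub for an on-board computer-vision model."""
    return frame["label"]  # a real model would infer this from the pixels

def select_for_downlink(frames):
    """Return the IDs of frames worth transmitting to the ground station."""
    return [f["id"] for f in frames if classify(f) in INTERESTING]

frames = [
    {"id": 1, "label": "cloud"},
    {"id": 2, "label": "volcano"},
    {"id": 3, "label": "ocean"},
    {"id": 4, "label": "wildfire"},
]
print(select_for_downlink(frames))  # [2, 4]
```

The same detect-then-act structure supports the other uses mentioned: a detection result can wake downstream systems or trigger extra imaging of the flagged region.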
Revitalizing the Past with Modern Technology
Machine learning systems excel at forming informed predictions, and they can be particularly beneficial in fields dealing with extensive, unorganized, or inadequately recorded data. Utilizing artificial intelligence for an initial assessment allows researchers and students to focus their efforts more effectively. The Library of Congress is currently employing this approach with historical newspapers, and Carnegie Mellon University’s libraries are now following suit.
Carnegie Mellon University is currently digitizing its collection of one million photographs. However, to make this archive accessible to researchers and the public, it requires organization and tagging. Consequently, computer vision algorithms are being utilized to categorize similar images, recognize objects and places, and perform other essential preliminary cataloging functions.
“Even a partially successful implementation would significantly enhance the collection’s metadata and potentially offer a viable solution for metadata creation should the archives receive funding to digitize the complete collection,” explained Matt Lincoln of CMU.
In a separate but related endeavor, a student at the Escola Politécnica da Universidade de Pernambuco in Brazil conceived a novel application of machine learning: enhancing older maps.
The developed tool utilizes old, hand-drawn maps and attempts to generate a representation resembling a satellite image through the use of a Generative Adversarial Network. GANs pit two networks against each other: a generator produces candidate images while a discriminator tries to distinguish them from authentic data, and the contest pushes the generator toward output indistinguishable from the real thing.
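That adversarial contest can be summarized by the standard GAN losses (shown here as a minimal numeric illustration, not the student's map model): the discriminator D is penalized for misjudging real versus generated images, and the generator G is penalized when D confidently flags its output as fake.

```python
# Minimal illustration of the standard (non-saturating) GAN objectives.
# d_real / d_fake are the discriminator's probabilities that a real or a
# generated image, respectively, is real.
from math import log

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: D wants d_real -> 1 and d_fake -> 0."""
    return -(log(d_real) + log(1 - d_fake))

def generator_loss(d_fake):
    """G wants D to score its output as real: d_fake -> 1."""
    return -log(d_fake)

# Early in training, D spots fakes easily (d_fake low), so G's loss is large.
print(round(generator_loss(0.1), 3))           # 2.303
# As G improves, d_fake rises toward 0.5 and both losses settle.
print(round(generator_loss(0.5), 3))           # 0.693
print(round(discriminator_loss(0.5, 0.5), 3))  # 1.386
```

Training alternates gradient steps that lower each network's loss; at the idealized equilibrium the discriminator outputs 0.5 everywhere, unable to tell generated maps from real imagery.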
Image Credits: Escola Politécnica da Universidade de Pernambuco

While the outcomes aren’t entirely realistic, the project demonstrates potential. These historical maps often lack precision, but they aren’t entirely devoid of meaning. Reconstructing them using contemporary mapping methods is an intriguing concept that could make these locations feel more relatable.
