Hydrogen’s Hidden Phase: Machine Learning Unlocks the Secrets of the Universe’s Most Abundant Element – SciTechDaily
Posted: April 25, 2023 at 12:10 am
Phases of solid hydrogen. The left is the well-studied hexagonal close packed phase, while the right is the new phase predicted by the authors' machine learning-informed simulations. Image by Wesley Moore. Credit: The Grainger College of Engineering at the University of Illinois Urbana-Champaign
Putting hydrogen on solid ground: simulations with a machine learning model predict a new phase of solid hydrogen.
A machine-learning technique developed by University of Illinois Urbana-Champaign researchers has revealed a previously undiscovered high-pressure solid hydrogen phase, offering insights into hydrogen's behavior under extreme conditions and the composition of gaseous planets like Jupiter and Saturn.
Hydrogen, the most abundant element in the universe, is found everywhere from the dust filling most of outer space to the cores of stars to many substances here on Earth. This would be reason enough to study hydrogen, but its individual atoms are also the simplest of any element with just one proton and one electron. For David Ceperley, a professor of physics at the University of Illinois Urbana-Champaign, this makes hydrogen the natural starting point for formulating and testing theories of matter.
Ceperley, also a member of the Illinois Quantum Information Science and Technology Center, uses computer simulations to study how hydrogen atoms interact and combine to form different phases of matter like solids, liquids, and gases. However, a true understanding of these phenomena requires quantum mechanics, and quantum mechanical simulations are costly. To simplify the task, Ceperley and his collaborators developed a machine-learning technique that allows quantum mechanical simulations to be performed with an unprecedented number of atoms. They reported in Physical Review Letters that their method found a new kind of high-pressure solid hydrogen that past theory and experiments missed.
"Machine learning turned out to teach us a great deal," Ceperley said. "We had been seeing signs of new behavior in our previous simulations, but we didn't trust them because we could only accommodate small numbers of atoms. With our machine learning model, we could take full advantage of the most accurate methods and see what's really going on."
Hydrogen atoms form a quantum mechanical system, but capturing their full quantum behavior is very difficult even on computers. A state-of-the-art technique like quantum Monte Carlo (QMC) can feasibly simulate hundreds of atoms, while understanding large-scale phase behaviors requires simulating thousands of atoms over long periods of time.
To make QMC more versatile, two former graduate students, Hongwei Niu and Yubo Yang, developed a machine learning model trained with QMC simulations capable of accommodating many more atoms than QMC by itself. They then used the model with postdoctoral research associate Scott Jensen to study how the solid phase of hydrogen that forms at very high pressures melts.
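The division of labor here is worth making concrete: an expensive, accurate method generates training data for a cheap surrogate, and the surrogate is then evaluated at a scale the reference method cannot reach. The sketch below is a generic machine-learned-potential workflow with toy descriptors and an invented stand-in energy function, not the authors' actual QMC pipeline:

```python
# A generic machine-learned-potential workflow, sketched with toy data.
# This is NOT the authors' QMC pipeline: the descriptors and the
# stand-in energy function below are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_reference_energy(descriptors):
    # Stand-in for an accurate but costly calculation (e.g., QMC).
    return (descriptors**2).sum(axis=1) + 0.5 * np.cos(descriptors).sum(axis=1)

# Descriptors summarizing local atomic environments; random here.
X_small = rng.normal(size=(500, 8))           # affordable reference set
y_small = expensive_reference_energy(X_small)

# Fit a cheap surrogate to the reference data.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X_small, y_small)

# The surrogate can now be evaluated for far more configurations than
# the reference method could afford, enabling large, long simulations.
X_large = rng.normal(size=(100_000, 8))
print(surrogate.predict(X_large)[:5])
```

The same pattern holds whatever the reference method: the surrogate's accuracy is bounded by the quality and coverage of the reference data it was trained on.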
The three of them were surveying different temperatures and pressures to form a complete picture when they noticed something unusual in the solid phase. While the molecules in solid hydrogen are normally close-to-spherical and form a configuration called hexagonal close packed (Ceperley compared it to stacked oranges), the researchers observed a phase where the molecules become oblong figures (Ceperley described them as egg-like).
"We started with the not-too-ambitious goal of refining the theory of something we know about," Jensen recalled. "Unfortunately, or perhaps fortunately, it was more interesting than that. There was this new behavior showing up. In fact, it was the dominant behavior at high temperatures and pressures, something there was no hint of in older theory."
To verify their results, the researchers trained their machine learning model with data from density functional theory, a widely used technique that is less accurate than QMC but can accommodate many more atoms. They found that the simplified machine learning model perfectly reproduced the results of standard theory. The researchers concluded that their large-scale, machine learning-assisted QMC simulations can account for effects and make predictions that standard techniques cannot.
This work has started a conversation between Ceperley's collaborators and some experimentalists. High-pressure measurements of hydrogen are difficult to perform, so experimental results are limited. The new prediction has inspired some groups to revisit the problem and more carefully explore hydrogen's behavior under extreme conditions.
Ceperley noted that understanding hydrogen under high temperatures and pressures will enhance our understanding of Jupiter and Saturn, gaseous planets primarily made of hydrogen. Jensen added that hydrogen's simplicity makes the substance important to study. "We want to understand everything, so we should start with systems that we can attack," he said. "Hydrogen is simple, so it's worth knowing that we can deal with it."
Reference: "Stable Solid Molecular Hydrogen above 900 K from a Machine-Learned Potential Trained with Diffusion Quantum Monte Carlo" by Hongwei Niu, Yubo Yang, Scott Jensen, Markus Holzmann, Carlo Pierleoni and David M. Ceperley, 17 February 2023, Physical Review Letters. DOI: 10.1103/PhysRevLett.130.076102
This work was done in collaboration with Markus Holzmann of Univ. Grenoble Alpes and Carlo Pierleoni of the University of L'Aquila. Ceperley's research group is supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Computational Materials Sciences program under Award DE-SC0020177.
Machine learning: As AI tools gain heft, the jobs that could be at stake – The Indian Express
Watch out for the man with the silicon chip
Hold on to your job with a good firm grip
'Cause if you don't you'll have had your chips
The same as my old man
Scottish revival singer-songwriter Ewan MacColl's 1986 track "My Old Man" was an ode to his father, an iron-moulder who faced an existential threat to his job because of the advent of technology. The lyrics could find some resonance nearly four decades on, as industry leaders and tech stalwarts predict that the advancement of large language models such as OpenAI's GPT-4, and their ability to write essays, code, and do maths with greater accuracy and consistency, heralds a fundamental tech shift, almost as significant as the creation of the integrated circuit, the personal computer, the web browser or the smartphone. But there still are question marks over how advanced chatbots could impact the job market. And if blue-collar work was the focus of MacColl's ballad, artificial intelligence (AI) models of the generative pre-trained transformer type signify a greater threat for white-collar workers, as more powerful word-predicting neural networks that carry out series of operations on arrays of inputs end up producing output that is strikingly humanlike. So, will this latest wave impact the current level of employment?
According to Goldman Sachs economists Joseph Briggs and Devesh Kodnani, the answer is a resounding yes: they predict that as many as 300 million full-time jobs around the world are set to be automated, with workers replaced by machines or AI systems. What lends credence to this stark prediction is the new wave of AI, especially large language models such as Microsoft-backed OpenAI's ChatGPT.
The Goldman Sachs economists predict that such technology could bring significant disruption to the labour market, with lawyers, economists, writers, and administrative staff among those projected to be at greatest risk of becoming redundant. In a new report, The Potentially Large Effects of Artificial Intelligence on Economic Growth, they calculate that approximately two-thirds of jobs in the US and Europe are set to be exposed to AI automation, to various degrees.
In general, white-collar workers, and workers in advanced economies more broadly, are projected to be at greater risk than blue-collar workers in developing countries. "The combination of significant labour cost savings, new job creation, and a productivity boost for non-displaced workers raises the possibility of a labour productivity boom like those that followed the emergence of earlier general-purpose technologies like the electric motor and personal computer," the report said.
And OpenAI itself predicts that a vast majority of workers will have at least part of their jobs automated by GPT models. In a study published on the arXiv preprint server, researchers from OpenAI and the University of Pennsylvania said that 80 percent of the US workforce could have at least 10 percent of their tasks affected by the introduction of GPTs.
Central to these predictions is how such models work: GPT stands for Generative Pre-trained Transformer, meaning the model is first trained by its developers on vast amounts of text; the growing stream of user queries then supplies data and feedback that can inform subsequent rounds of training. The OpenAI study also said that around 19 per cent of US workers will see at least 50 per cent of their tasks impacted, with the qualifier that GPT exposure is likely greater for higher-income jobs, but spans across almost all industries. These models, the OpenAI study said, will end up as general-purpose technologies like the steam engine or the printing press.
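At its core, a model of this type is a next-token predictor: given a text prefix, it proposes a continuation one token at a time. A minimal sketch using the openly available GPT-2 model through the Hugging Face transformers library (GPT-4 itself is reachable only through OpenAI's hosted API):

```python
# What "word-predicting" means in practice: given a prefix, the model
# proposes a continuation token by token. Uses the open GPT-2 model via
# Hugging Face transformers (pip install transformers).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("The impact of AI on white-collar jobs will",
                max_new_tokens=20, num_return_sequences=1)
print(out[0]["generated_text"])
```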
A January 2023 paper, by Anuj Kapoor of the Indian Institute of Management Ahmedabad and his co-authors, explored the question of whether AI tools or humans were more effective at helping people lose weight. The authors conducted the first causal evaluation of the effectiveness of human vs. AI tools in helping consumers achieve their health outcomes in a real-world setting by comparing the weight loss outcomes achieved by users of a mobile app, some of whom used only an AI coach while others used a human coach as well.
Interestingly, while human coaches scored higher broadly, users with a higher BMI did not fare as well with a human coach as those who weighed less.
"The results of our analysis can extend beyond the narrow domain of weight loss apps to that of healthcare domains more generally. We document that human coaches do better than AI coaches in helping consumers achieve their weight loss goals. Importantly, there are significant differences in this effect across different consumer groups. This suggests that a one-size-fits-all approach might not be most effective," Kapoor told The Indian Express.
The findings: human coaches help consumers achieve their goals better than AI coaches, and the advantage is larger for consumers below the median BMI than for those above it; larger for consumers below the median age than for those above it; larger for consumers who spent below-median time in a spell than for those who spent above-median time; and larger for female consumers than for male consumers.
While Kapoor said the paper did not go deeper into the why of the effectiveness of AI+human plans for low-BMI individuals over high-BMI individuals, he speculated on the reasons for that trend: "Humans can feel emotions like shame and guilt while dealing with other humans. This is not always true, but in general, and there's ample evidence to suggest this, research has shown that individuals feel shameful while purchasing contraceptives and also while consuming high-calorie indulgent food items. Therefore, high BMI individuals might find it difficult to interact with other human coaches. This doesn't mean that health tech platforms shouldn't suggest human plans for high BMI individuals. Instead, they can focus on (1) training their coaches well to make the high BMI individuals feel comfortable and heard and (2) deciding the optimal mix of the AI and human components of the guidance for weight loss," he added.
Similarly, Kapoor said the female consumers' stronger response to human coaches tracks with recent findings in the human-AI interaction literature, which suggest that AI adoption differs between women and men and across age groups; this differential adoption, he added, could explain the differential impact of human coaches for female versus male consumers.
An earlier OECD paper on AI and employment, titled "New Evidence from Occupations Most Exposed to AI," asserted that the impact of these tools would be skewed towards high-skilled, white-collar occupations, including business professionals; managers; science and engineering professionals; and legal, social and cultural professionals.
This contrasts with the impact of previous automating technologies, which have tended to take over primarily routine tasks performed by lower-skilled workers. The 2021 study noted that higher exposure to AI may be a good thing for workers, as long as they have the skills to use these technologies effectively. The research found that over the period 2012-19, greater exposure to AI was associated with higher employment in occupations where computer use is high, suggesting that workers who have strong digital skills may have a greater ability to adapt to and use AI at work and, hence, to reap the benefits that these technologies bring. By contrast, there is some indication that higher exposure to AI is associated with lower growth in average hours worked in occupations where computer use is low. On the whole, the study findings suggested that the adoption of AI may increase labour market disparities between workers who have the skills to use AI effectively and those who do not. Making sure that workers have the right skills to work with new technologies is therefore a key policy challenge, which policymakers will increasingly have to grapple with.
New machine-learning method predicts body clock timing to improve sleep and health decisions – Medical Xpress
A new machine-learning method could help us gauge the time of our internal body clock, helping us all make better health decisions, including when and how long to sleep.
The research, which has been conducted by the University of Surrey and the University of Groningen, used a machine learning program to analyze metabolites in blood to predict the time of our internal circadian timing system. The study is published in the journal Proceedings of the National Academy of Sciences.
To date, the standard method to determine the timing of the circadian system has been to measure the timing of our natural melatonin rhythm, specifically when we start producing melatonin, a point known as dim light melatonin onset (DLMO).
Professor Debra Skene, co-author of the study from the University of Surrey, said, "After taking two blood samples from our participants, our method was able to predict the DLMO of individuals with an accuracy comparable or better than previous, more intrusive estimation methods."
The research team collected a time series of blood samples from 24 individuals (12 men and 12 women). All participants were healthy, did not smoke, and had kept regular sleeping schedules for seven days before they visited the University clinical research facility. The research team then measured over 130 metabolite rhythms using a targeted metabolomics approach. This metabolite data was then fed into a machine learning program to predict circadian timing.
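Stripped to its essentials, the modelling task is a regression from a metabolite panel to a clock time. The sketch below uses synthetic data, a generic regularized linear model, and invented effect sizes; it is not the study's dataset or its actual model:

```python
# A rough sketch of the modelling task with synthetic data; this is not
# the study's data, metabolite panel, or exact model. Predicting DLMO
# from metabolite levels is treated here as regularized regression.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects, n_metabolites = 24, 130

X = rng.normal(size=(n_subjects, n_metabolites))   # metabolite levels per subject
# Toy DLMO times (decimal hours), driven by a few metabolites plus noise.
dlmo = 21.0 + 0.3 * X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_subjects)

model = RidgeCV(alphas=np.logspace(-3, 3, 13))
mae = -cross_val_score(model, X, dlmo, cv=4,
                       scoring="neg_mean_absolute_error").mean()
print(f"Cross-validated error: {mae:.2f} hours")
```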
Professor Skene stated, "We are excited but cautious about our new approach to predicting DLMO, as it is more convenient and requires less sampling than the tools currently available. While our approach needs to be validated in different populations, it could pave the way to optimize treatments for circadian rhythm sleep disorders and injury recovery.
"Smart devices and wearables offer helpful guidance on sleep patternsbut our research opens the way to truly personalized sleep and meal plans, aligned to our personal biology, with the potential to optimize health and reduce the risks of serious illness associated with poor sleep and mistimed eating."
Professor Roelof Hut, co-author of the study from the University of Groningen, said, "Our results could help to develop an affordable way to estimate our own circadian rhythms that will optimize the timing of behaviors, diagnostic sampling, and treatment."
More information: Tom Woelders et al., "Machine learning estimation of human body time using metabolomic profiling," Proceedings of the National Academy of Sciences (2023). DOI: 10.1073/pnas.2212685120
Wallaroo.ai partners with VMware on machine learning at the edge – SiliconANGLE News
Machine learning startup Wallaroo Labs Inc., better known as Wallaroo.ai, said today it's partnering with the virtualization software giant VMware Inc. to create a unified edge machine learning and artificial intelligence deployment and operations platform that's aimed at communications service providers.
Wallaroo.ai is the creator of a unified platform for easily deploying, observing and optimizing machine learning in production, on any cloud, on-premises or at the network edge. The company says it's joining with VMware to help CSPs better make money from their networks by supporting them with scalable machine learning at the edge.
It's aiming to solve the problem of managing edge machine learning through easier deployment, more efficient inference and continuous optimization of models at 5G edge locations and in distributed networks. CSPs will also benefit from a unified operations center that allows them to observe, manage and scale up edge machine learning deployments from one place.
More specifically, Wallaroo.ai said, its new offering will make it simple to deploy AI models trained in one environment in multiple resource-constrained edge endpoints, while providing tools to help test and continuously optimize those models in production. Benefits include automated observability and drift detection, so users will know if their models start to generate inaccurate responses or predictions. It also offers integration with popular ML development environments, such as Databricks, and cloud platforms such as Microsoft Azure.
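Implementations differ, but the idea behind drift detection is simple: compare what the model sees or produces in production against a baseline captured at deployment. A minimal sketch (not Wallaroo.ai's actual API) using a two-sample Kolmogorov-Smirnov test:

```python
# One common drift check, sketched with synthetic numbers; this is not
# Wallaroo.ai's actual API. Compare live model scores at the edge
# against a baseline captured at deployment with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
baseline_scores = rng.normal(0.20, 0.05, 10_000)  # scores at deployment time
live_scores = rng.normal(0.35, 0.05, 1_000)       # scores observed in production

stat, p_value = ks_2samp(baseline_scores, live_scores)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {stat:.3f}); flag model for review.")
```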
Wallaroo.ai co-founder and Chief Executive Vid Jain told SiliconANGLE that CSPs are specifically looking for help in deploying machine learning models for tasks such as monitoring network health, network optimization, predictive maintenance and security. Doing so is difficult, he says, because the models have a number of requirements, including the need for very efficient compute at the edge.
At present, most edge locations are constrained by low-powered compute resources, limited memory, and strict latency budgets. In addition, CSPs need the ability to deploy the models at many edge endpoints simultaneously, and they also need a way to monitor those endpoints.
"We offer CSPs a highly efficient, trust-based inference server that is ideally suited for fast edge inferencing, together with a single unified operations center," Jain explained. "We are also working on integrating orchestration software such as VMware that allows for monitoring, updating and management of all the edge locations running AI. The Wallaroo.AI server and models can be deployed into telcos' 5G infrastructure and bring back any monitoring data to a central hub."
Stephen Spellicy, vice president of service provider marketing, enablement and business development at VMware, said the partnership is all about helping telecommunications companies put machine learning to work in distributed environments more easily. Machine learning at the edge has multiple use cases, he explained, such as better securing and optimizing distributed networks and providing low-latency services to businesses and consumers.
Wallaroo.ai said its platform will be able to operate across multiple clouds, radio access networks and edge environments, which it believes will become the primary elements of a future, low-latency and highly distributed internet.
Sliding Out of My DMs: Young Social Media Users Help Train … – Drexel University
In a first-of-its-kind effort, social media researchers from Drexel University, Vanderbilt University, Georgia Institute of Technology and Boston University are turning to young social media users to help build a machine learning program that can spot unwanted sexual advances on Instagram. Trained on data from more than 5 million direct messages, annotated and contributed by 150 adolescents who had experienced conversations that made them feel sexually uncomfortable or unsafe, the technology can quickly and accurately flag risky DMs.
The project, which was recently published by the Association for Computing Machinery in its Proceedings of the ACM on Human-Computer Interaction, is intended to address concerns that an increase of teens using social media, particularly during the pandemic, is contributing to rising trends of child sexual exploitation.
In the year 2020 alone, the National Center for Missing and Exploited Children received more than 21.7 million reports of child sexual exploitation, a 97% increase over the year prior. "This is a very real and terrifying problem," said Afsaneh Razi, PhD, an assistant professor in Drexel's College of Computing & Informatics, who was a leader of the research.
Social media companies are rolling out new technology that can flag and remove sexually exploitative images and help users more quickly report these illegal posts. But advocates are calling for greater protection for young users, protection that could identify and curtail these risky interactions sooner.
The group's efforts are part of a growing field of research looking at how machine learning and artificial intelligence can be integrated into platforms to help keep young people safe on social media, while also ensuring their privacy. Its most recent project stands apart for its collection of a trove of private direct messages from young users, which the team used to train a machine learning-based program that is 89% accurate at detecting sexually unsafe conversations among teens on Instagram.
"Most of the research in this area uses public datasets, which are not representative of real-world interactions that happen in private," Razi said. "Research has shown that machine learning models based on the perspectives of those who experienced the risks, such as cyberbullying, provide higher performance in terms of recall. So, it is important to include the experiences of victims when trying to detect the risks."
Each of the 150 participants, who range in age from 13 to 21, had used Instagram for at least three months between the ages of 13 and 17, exchanged direct messages with at least 15 people during that time, and had at least two direct messages that made them or someone else feel uncomfortable or unsafe. They contributed their Instagram data, more than 15,000 private conversations, through a secure online portal designed by the team, and were then asked to review their messages and label each conversation as safe or unsafe, according to how it made them feel.
"Collecting this dataset was very challenging due to the sensitivity of the topic and because the data is being contributed by minors in some cases," Razi said. "Because of this, we drastically increased the precautions we took to preserve confidentiality and privacy of the participants and to ensure that the data collection met high legal and ethical standards, including reporting child abuse and the possibility of uploads of potentially illegal artifacts, such as child abuse material."
The participants flagged 326 conversations as unsafe and, in each case, they were asked to identify what type of risk it presented (nudity/porn, sexual messages, harassment, hate speech, violence/threat, sale or promotion of illegal activities, or self-injury) and the level of risk they felt (high, medium or low).
This level of user-generated assessment provided valuable guidance when it came to preparing the machine learning programs. Razi noted that most social media interaction datasets are collected from publicly available conversations, which are much different than those held in private. And they are typically labeled by people who were not involved with the conversation, so it can be difficult for them to accurately assess the level of risk the participants felt.
"With self-reported labels from participants, we not only detect sexual predators but also assessed the survivors' perspectives of the sexual risk experience," the authors wrote. "This is a significantly different goal than attempting to identify sexual predators. Built upon this real-user dataset and labels, this paper also incorporates human-centered features in developing an automated sexual risk detection system."
Specific combinations of conversation and message features were used as the input of the machine learning models. These included contextual features, like age, gender and relationship of the participants; linguistic features, such as word count, the focus of questions, or topics of the conversation; whether it was positive, negative or neutral; how often certain terms were used; and whether or not a set of 98 pre-identified sexually related words were used.
This allowed the machine learning programs to designate a set of attributes of risky conversations, and thanks to the participants' assessments of their own conversations, the program could also rank the relative level of risk.
The team put its model to the test against a large set of public sample conversations created specifically for sexual predation risk-detection research. The best performance came from its Random Forest classifier program, which can rapidly assign features to sample conversations and compare them to known sets that have reached a risk threshold. The classifier accurately identified 92% of unsafe sexual conversations from the set. It was also 84% accurate at flagging individual risky messages.
By incorporating its user-labeled risk assessment training, the models were also able to tease out the most relevant characteristics for identifying an unsafe conversation. "Contextual features, such as age, gender and relationship type, as well as linguistic inquiry and word count contributed the most to identifying conversations that made young users feel unsafe," they wrote.
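A stripped-down sketch of this kind of pipeline shows how contextual and linguistic features feed a Random Forest and how feature importances fall out. The feature names and synthetic labels below are invented stand-ins for the participants' annotations, not the study's data:

```python
# A stripped-down risk-detection pipeline with invented feature names and
# synthetic labels standing in for the participants' own annotations.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
X = pd.DataFrame({
    "age_gap": rng.integers(0, 30, n),        # contextual: age difference
    "known_offline": rng.integers(0, 2, n),   # contextual: relationship
    "word_count": rng.integers(5, 500, n),    # linguistic
    "sexual_term_hits": rng.poisson(0.5, n),  # matches in a term lexicon
    "negative_sentiment": rng.random(n),      # sentiment score
})
# Toy labels loosely tied to the features (stand-ins for self-reports).
y = ((X["sexual_term_hits"] > 1) & (X["age_gap"] > 8)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))

# Importances indicate which features carry the signal, echoing the
# paper's finding that contextual and word-count features matter most.
print(sorted(zip(clf.feature_importances_, X.columns), reverse=True))
```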
This means that a program like this could be used to automatically warn users, in real-time, when a conversation has become problematic, as well as to collect data after the fact. Both of these applications could be tremendously helpful in risk prevention and the prosecution of crimes, but the authors caution that their integration into social media platforms must preserve the trust and privacy of the users.
"Social service providers find value in the potential use of AI as an early detection system for risks, because they currently rely heavily on youth self-reports after a formal investigation had occurred," Razi said. "But these methods must be implemented in a privacy-preserving manner to not harm the trust and relationship of the teens with adults. Many parental monitoring apps are privacy invasive since they share most of the teen's information with parents, and these machine learning detection systems can help with minimal sharing of information and guidelines to resources when it is needed."
They suggest that if the program is deployed as a real-time intervention, then young users should be provided with a suggestion rather than an alert or automatic report and they should be able to provide feedback to the model and make the final decision.
While the groundbreaking nature of its training data makes this work a valuable contribution to the field of computational risk detection and adolescent online safety research, the team notes that it could be improved by expanding the size of the sample and looking at users of different social media platforms. The training annotations for the machine learning models could also be revised to allow outside experts to rate the risk of each conversation.
The group plans to continue its work and to further refine its risk detection models. It has also created an open-source community to safely share the data with other researchers in the field recognizing how important it could be for the protection of this vulnerable population of social media users.
"The core contribution of this work is that our findings are grounded in the voices of youth who experienced online sexual risks and were brave enough to share these experiences with us," they wrote. "To the best of our knowledge, this is the first work that analyzes machine learning approaches on private social media conversations of youth to detect unsafe sexual conversations."
This research was supported by the U.S. National Science Foundation and the William T. Grant Foundation.
In addition to Razi, Ashwaq Alsoubai and Pamela J. Wisniewski, from Vanderbilt University; Seunghyun Kim and Munmun De Choudhury, from Georgia Institute of Technology; and Shiza Ali and Gianluca Stringhini, from Boston University, contributed to the research.
Read the full paper here: https://dl.acm.org/doi/10.1145/3579522
Machine Learning Has Value, but It’s Still Just a Tool – MedCity News
Machine learning (ML) has exciting potential for a constellation of uses in clinical trials. But hype surrounding the term may build expectations that ML is not equipped to deliver. Ultimately, ML is a tool, and like any tool, its value will depend on how well users understand and manage its strengths and weaknesses. A hammer is an effective tool for pounding nails into boards, after all, but it is not the best option if you need to wash a window.
ML has some obvious benefits as a way to quickly evaluate large, complex datasets and give users a quick initial read. In some cases, ML models can even identify subtleties that humans might struggle to notice, and a stable ML model will consistently and reproducibly generate similar results, which can be both a strength and a weakness.
ML can also be remarkably accurate, assuming the data used to train the ML model was accurate and meaningful. Image recognition ML models are being widely used in radiology with excellent results, sometimes catching things missed by even the most highly trained human eye.
This doesn't mean ML is ready to replace clinicians' judgment or take their jobs, but results so far offer compelling evidence that ML may have value as a tool to augment their clinical judgment.
A tool in the toolbox
That human factor will remain important, because even as they gain sophistication, ML models will lack the insight clinicians build up over years of experience. As a result, subtle differences in one variable may cause the model to miss something important (false negatives), or overstate something that is not important (false positives).
There is no way to program for every possible influence on the available data, and there will inevitably be a factor missing from the dataset. As a result, outside influences such as a person moving during ECG collection, suboptimal electrode connection, or ambient electrical interference may introduce variability that ML is not equipped to address. In addition, ML won't recognize an error such as an end user entering an incorrect patient identifier; but because ECG readings are unique, like fingerprints, a skilled clinician might realize that the tracing they are looking at does not match what they have previously seen from the same patient, prompting questions about who the tracing actually belongs to.
In other words, machines are not always wrong, but they are also not always right. The best results come when clinicians use ML to complement, not supplant, their own efforts.
Maximizing ML
Clinicians who understand how to effectively implement ML in clinical trials can benefit from what it does well.
The value of ML will continue to grow as algorithms improve and computing power increases, but there is little reason to believe it will ever replace human clinical oversight. Ultimately, ML provides objectivity and reproducibility in clinical trials, while humans provide subjectivity and can contribute knowledge about factors the program does not take into account. Both are needed. And while MLs ability to flag data inconsistencies may reduce some workload, those predictions still must be verified.
There is no doubt that ML has incredible potential for clinical trials. Its power to quickly manage and analyze large quantities of complex data will save study sponsors money and improve results. However, it is unlikely to completely replace human clinicians for evaluating clinical trial data because there are too many variables and potential unknowns. Instead, savvy clinicians will continue to contribute their expertise and experience to further develop ML platforms to reduce repetitive and tedious tasks with a high degree of reliability and a low degree of variability, which will allow users to focus on more complex tasks.
New Machine Learning Parameterization Tested on Atmospheric … – Eos
Editors' Highlights are summaries of recent papers by AGU's journal editors. Source: Journal of Advances in Modeling Earth Systems
Atmospheric models must represent processes on spatial scales spanning many orders of magnitude. Although small-scale processes such as thunderstorms and turbulence are critical to the atmosphere, most global models cannot explicitly resolve them due to computational expense. In conventional models, heuristic estimates of the effect of these processes, known as parameterizations, are designed by experts. A recent line of research uses machine learning to create data-driven parameterizations directly from very high-resolution simulations that require fewer assumptions.
Yuval and O'Gorman [2023] provide the first such example of a neural network parameterization of the effects of subgrid processes on the vertical transport of momentum in the atmosphere. A careful approach is taken to generate a training dataset, accounting for subtle issues in the horizontal grid of the high-resolution model. The new parameterization generally improves the simulation of winds in a coarse-resolution model, but also over-corrects and leads to larger biases in one configuration. The study serves as a complete and clear example for researchers interested in the application of machine learning for parameterization.
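In outline, such a parameterization is supervised regression: map the coarse-grid column state to the subgrid momentum tendency diagnosed from coarsened high-resolution output. The toy sketch below invents both the data and the target function, showing only the setup, not the paper's architecture or training corpus:

```python
# A toy version of the idea with synthetic data: regress the subgrid
# momentum tendency (here a made-up function of vertical wind shear) on
# the coarse-grid column state. This shows only the supervised setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
n_cols, n_levels = 2000, 48

u = rng.normal(size=(n_cols, n_levels))   # zonal wind profile per column
T = rng.normal(size=(n_cols, n_levels))   # temperature profile per column
X = np.hstack([u, T])                     # coarse-grid column state

# "True" subgrid tendency diagnosed from high-resolution output (toy form).
y = -0.5 * np.gradient(u, axis=1) + rng.normal(scale=0.05, size=(n_cols, n_levels))

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200, random_state=0)
net.fit(X, y)
print("R^2 on training columns:", round(net.score(X, y), 3))
```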
Citation: Yuval, J., & OGorman, P. A. (2023). Neural-network parameterization of subgrid momentum transport in the atmosphere. Journal of Advances in Modeling Earth Systems, 15, e2023MS003606. https://doi.org/10.1029/2023MS003606
Oliver Watt-Meyer, Associate Editor, JAMES
How AI, automation, and machine learning are upgrading clinical trials – Clinical Trials Arena
Artificial intelligence (AI) is set to be the most disruptive emerging technology in drug development in 2023, unlocking advanced analytics, enabling automation, and increasing speed across the clinical trial value chain.
Today's clinical trials landscape is being shaped by macro trends that include the Covid-19 pandemic, geopolitical uncertainty, and climate pressures. Meanwhile, advancements in adaptive design, personalisation and novel treatments mean that clinical trials are more complex than ever. Sponsors seek greater agility and faster time to commercialisation while maintaining quality and safety in an evolving global market. Across every stage of clinical research, AI offers optimisation opportunities.
A new whitepaper from digital technology solutions provider Taimei examines the transformative impact of AI on the clinical trials of today and explores how it will shape the future.
"The big delay areas are always patient recruitment, site start-up, querying, data review, and data cleaning," explains Scott Clark, chief commercial officer at Taimei.
Patient recruitment is typically the most time-consuming stage of a clinical trial. Sponsors must find and identify a set of subjects, gather information, and use inclusion/exclusion criteria to filter and select participants. And high-quality patient recruitment is vital to a trial's success.
Once patients are recruited, they must be managed effectively. Patient retention has a direct impact on the quality of the trial's results, so their management is crucial. In today's clinical trials, these patients can be distributed over more than a hundred sites and across multiple geographies, presenting huge data management challenges for sponsors.
AI can be leveraged across patient recruitment and management to boost efficiency, quality, and retention. Algorithms can gather subject information and screen and filter potential participants. They can analyse data sources such as medical records and even social media content to detect subgroups and geographies that may be relevant to the trial. AI can also alert medical staff and patients to clinical trial opportunities.
The result? Faster, more efficient patient recruitment, with the ability to reach more diverse populations and more relevant participants, as well as increase quality and retention. "[Using AI], you can develop the correct cohort," explains Clark. "It's about accuracy, efficiency, and safety."
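At its simplest, the screening step is structured filtering over patient records. The sketch below uses hypothetical fields and thresholds; production pipelines also parse unstructured notes and protocol text before a step like this:

```python
# A toy inclusion/exclusion criteria check over structured records, with
# hypothetical fields and thresholds, of the kind an AI recruitment
# pipeline automates at scale.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "age":        [34, 61, 47, 29],
    "hba1c":      [7.9, 6.1, 8.4, 9.2],
    "on_insulin": [False, False, True, False],
})

inclusion = records["age"].between(30, 65) & (records["hba1c"] >= 7.0)
exclusion = records["on_insulin"]
cohort = records[inclusion & ~exclusion]   # candidates for screening calls
print(cohort)
```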
Study build can be a laborious and repetitive process. Typically, data managers must read the study protocol and generate as many as 50-60 case report forms (CRFs). Each trial has different CRF requirements. CRF design and database building can take weeks and has a direct impact on the quality and accuracy of the clinical trial.
Enter AI. Automated text reading can parse, categorise, and stratify corpora of words to automatically generate eCRFs and the data capture matrix. "In study building, AI is able to read the protocols and pull the best CRF forms for the best outcomes," adds Clark.
It can then use the data points from the CRFs to build the study base, creating the whole database in a matter of minutes rather than weeks. The database is structured for export to the biostatisticians' programming. AI can then facilitate the analysis of data and develop all of the required tables, listings and figures (TLFs). It can even come to a conclusion on the outcomes, pending review.
Optical character recognition (OCR) can address structured and unstructured native documents. Using built-in edit checks, AI can reduce the timeframe for study build from ten weeks to just one, freeing up data managers' time. "We are able to do up to 168% more edit checks than are done currently in the human manual process," says Clark. AI can also automate remote monitoring to identify outliers and suggest the best route of action, to be taken with approval from the project manager.
AI data management is flexible, agile, and robust. Using electronic data capture (EDC) removes the need to manage paper-based documentation. This is essential for modern clinical trials, which can present huge amounts of unstructured data thanks to the rise of advances such as decentralisation, wearables, telemedicine, and self-reporting.
"Once the trial is launched, you can use AI to do automatic querying and medical coding," says Clark. When there's a piece of data that doesn't make sense or is not coded, AI can flag it and provide suggestions automatically. "The data manager just reviews what it's corrected," adds Clark. "That's a big time-saver." By leveraging AI throughout data input, sponsors also cut out the lengthy process of data cleaning at the end of a trial.
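The mechanic underneath an automated edit check is a validation rule evaluated as data lands. A minimal sketch, with invented fields, plausibility ranges, and coding convention, of how such checks raise queries during the trial rather than at the end:

```python
# A minimal edit-check sketch with invented fields, ranges, and codes:
# validation rules run as data lands, so queries are raised during the
# trial instead of piling up as end-of-study data cleaning.
import pandas as pd

ecrf = pd.DataFrame({
    "subject": ["S01", "S02", "S03"],
    "systolic_bp": [128, 310, 117],                        # 310 is implausible
    "adverse_event_code": ["10019211", None, "10047700"],  # MedDRA-style codes
})

checks = {
    "systolic_bp out of range": ~ecrf["systolic_bp"].between(60, 260),
    "adverse event not coded":  ecrf["adverse_event_code"].isna(),
}

for name, mask in checks.items():
    for subject in ecrf.loc[mask, "subject"]:
        print(f"QUERY [{subject}]: {name} - please review")
```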
Implementing AI means establishing the proof of concept, building a customised knowledge base, and training the model to solve the problem on a large scale. Algorithms must be trained on large amounts of data to remove bias and ensure accuracy. Today, APIs enable best-in-class advances to be integrated into clinical trial applications.
By taking repetitive tasks away from human personnel, AI accelerates the time to market for life-saving drugs and frees up man-hours for more specialist tasks. By analysing past and present trial data, AI can be used to inform future research, with machine learning able to suggest better study design. In the long term, AI has the potential to shift the focus away from trial implementation and towards drug discovery, enabling improved treatments for patients who need them.
To find out more, download the whitepaper below.
Application of Machine Learning in Cybersecurity – Read IT Quik
Cybersecurity is among the most crucial aspects of any business, helping to ensure the security and safety of its data. Artificial intelligence and machine learning are in high demand and are changing the cybersecurity industry as a whole. Cybersecurity stands to benefit greatly from machine learning, which can be used to improve available antivirus software, identify cyber threats, and battle online crime. With the increasing sophistication of cyber threats, companies are constantly looking for innovative ways to protect their systems and data. Machine learning is one emerging technology that is making waves in cybersecurity. By leveraging artificial intelligence and machine learning algorithms, cybersecurity professionals can now detect and mitigate cyber threats more effectively. This article will delve into key areas where machine learning is transforming the security landscape.
One of the biggest challenges in cybersecurity is accurately identifying legitimate connection requests and suspicious activities within a companys systems. With thousands of requests pouring in constantly, human analysis can fall short. This is where machine learning can play a crucial role. AI-powered cyber threat identification systems can monitor incoming and outgoing calls and requests to the system to detect suspicious activity. For instance, there are many companies that offer cybersecurity software that utilizes AI to analyze and flag potentially harmful activities, helping security professionals stay ahead of cyber threats.
Traditional antivirus software relies on known virus and malware signatures to detect threats, requiring frequent updates to keep up with new strains. However, machine learning can revolutionize this approach. ML-integrated antivirus software can identify viruses and malware based on their abnormal behavior rather than relying solely on signatures. This enables the software to detect not only known threats but also newly created ones. For example, companies like Cylance have developed smart antivirus software that uses ML to learn how to detect viruses and malware from scratch, reducing the dependence on signature-based detection.
Cyber threats can often infiltrate a company's network by stealing user credentials and logging in with legitimate credentials, activity that can be challenging to detect with traditional methods. However, machine learning algorithms can analyze user behavior patterns to identify anomalies. By training the algorithm to recognize each user's standard login and logout patterns, any deviation from these patterns can trigger an alert for further investigation. For instance, Darktrace offers cybersecurity software that uses ML to analyze network traffic information and identify abnormal user behavior patterns.
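A minimal sketch of that idea: fit an anomaly detector to a user's normal sessions, then flag sessions that deviate. The session features below are synthetic, and deployed products model far richer signals:

```python
# A minimal behavioral-anomaly sketch with synthetic sessions: fit a
# detector to a user's normal login pattern, then flag deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(5)

# Per-session features: login hour, session length (min), data moved (MB).
normal_sessions = np.column_stack([
    rng.normal(9, 1.5, 500),    # typically logs in around 9 am
    rng.normal(45, 10, 500),
    rng.normal(20, 5, 500),
])
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

suspicious = np.array([[3.0, 240.0, 900.0]])  # 3 am login, huge transfer
print(detector.predict(suspicious))           # -1 means flagged as anomalous
```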
Machine learning offers several advantages in the field of cyber security. First and foremost, it enhances accuracy by analyzing vast amounts of data in real time, helping to identify potential threats promptly. ML-powered systems can also adapt and evolve as new threats emerge, making them more resilient against rapidly growing cyber-attacks. Moreover, ML can provide valuable insights and recommendations to cybersecurity professionals, helping them make informed decisions and take proactive measures to prevent cyber threats.
As cyber threats continue to evolve, companies must embrace innovative technologies like machine learning to strengthen their cybersecurity defenses. Machine learning is transforming the cybersecurity landscape with its ability to analyze large volumes of data, adapt to new threats, and detect anomalies in user behavior. By leveraging the power of AI and ML, companies can stay ahead of cyber threats and safeguard their systems and data. Embrace the future of cybersecurity with machine learning and ensure the protection of your company's digital assets.
Big data and machine learning can usher in a new era of policymaking – Harvard Kennedy School
Q: What are the challenges to undertaking data analytical research? And where have these modes of analysis been successful?
The challenges are many, especially when you want to make a meaningful impact in one of the most complex sectors: the health care sector. The health care sector involves a variety of stakeholders, especially in the United States, where health care is extremely decentralized yet highly regulated, for example in the areas of data collection and data use. Analytics-based solutions that can help one part of this sector might harm other parts, making globally optimal solutions extremely difficult to find. Therefore, finding data-driven approaches that can have public impact is not a walk in the park.
Then there are various challenges in implementation. In my lab, we can design advanced machine learning and AI algorithms that have outstanding performance. But if they are not implemented in practice, or if the recommendations they provide are not followed, they won't have any tangible impact.
In some of our recent experiments, for example, we found that the algorithms we had designed outperformed expert physicians in one of the leading U.S. hospitals. Interestingly, when we provided physicians with our algorithmic-based recommendations, they did not put much weight on the advice they got from the algorithms, and ignored it when treating patients, although they knew the algorithm most likely outperforms them.
We then studied ways of removing this obstacle. We found that combining human expertise with the recommendations provided by algorithms not only made it more likely for the physicians to put more weight on the algorithms' advice, but also synthesized recommendations that are superior to both the best algorithms and the human experts.
We have also observed similar challenges at the policy level. For example, we have developed advanced algorithms trained on large-scale data that could help the Centers for Disease Control and Prevention improve its opioid-related policies. The opioid epidemic caused more than 556,000 deaths in the United States between 2000 and 2020, and yet the authorities still do not have a complete understanding of what can be done to effectively control this deadly epidemic. Our algorithms have produced recommendations we believe are superior to the CDC's. But, again, a significant challenge is to make sure the CDC and other authorities listen to these superior recommendations.
I do not want to imply that policymakers or other authorities are always against these algorithm-driven solutions (some are more eager than others), but I believe the helpfulness of algorithms is consistently underrated and often ignored in practice.
Q: How do you think about the role of oversight and regulation in this field of new technologies and data analytical models?
Imposing appropriate regulations is important. There is, however, a fine line: while new tools and advancements should be guarded against misuses, the regulations should not block these tools from reaching their full potential.
As an example, in a paper that we published in the National Academy of Medicine in 2021, we discussed that the use of mobile health (mHealth) interventions (mainly enabled through advanced algorithms and smart devices) have been rapidly increasing worldwide as health care providers, industry, and governments seek more efficient ways of delivering health care. Despite the technological advances, increasingly widespread adoption, and endorsements from leading voices from the medical, government, financial, and technology sectors, these technologies have not reached their full potential.
Part of the reason is that there are scientific challenges that need to be addressed. For example, as we discuss in our paper, mHealth technologies need to make use of more advanced algorithms and statistical experimental designs in deciding how best to adapt the content and delivery timing of a treatment to the user's current context.
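One standard formalism for that adapt-as-you-go decision problem is a contextual bandit: choose whether to deliver a prompt given the user's current context, observe the response, and update. A toy epsilon-greedy sketch with invented contexts and engagement rates:

```python
# A toy epsilon-greedy contextual bandit with invented contexts and
# engagement rates: the system learns in which contexts prompting the
# user tends to produce engagement.
import numpy as np

rng = np.random.default_rng(6)
contexts = ["morning", "commute", "evening"]
arms = ["send_prompt", "hold_off"]
true_engagement = {"morning": 0.6, "commute": 0.2, "evening": 0.4}

counts = {(c, a): 0 for c in contexts for a in arms}
values = {(c, a): 0.0 for c in contexts for a in arms}
epsilon = 0.1  # fraction of decisions spent exploring

for _ in range(5000):
    c = contexts[rng.integers(len(contexts))]
    if rng.random() < epsilon:                 # explore
        a = arms[rng.integers(len(arms))]
    else:                                      # exploit the best estimate
        a = max(arms, key=lambda arm: values[(c, arm)])
    reward = float(rng.random() < true_engagement[c]) if a == "send_prompt" else 0.0
    counts[(c, a)] += 1
    values[(c, a)] += (reward - values[(c, a)]) / counts[(c, a)]  # running mean

print({k: round(v, 2) for k, v in values.items()})
```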
However, various regulatory challenges remain, such as how best to protect user data. The Food and Drug Administration in a 2019 statement encouraged the development of mobile medical apps (MMAs) that improve health care but also emphasized its public health responsibility to oversee the safety and effectiveness of medical devices, including mobile medical apps. Balancing between encouraging new developments and ensuring that such developments abide by the well-known principle of "do no harm" is not an easy regulatory task.
In the end, what is needed is two-fold: (a) advancements in the underlying science, and (b) appropriately balanced regulations. If these are met, the possibilities for using advanced analytics science methods in solving our lingering societal problems are endless.