What’s the future of AI? – McKinsey
Posted: May 5, 2024 at 2:42 am
[Illustration: seven glasslike panels floating above a grid, transitioning from dark to light blue, with two pink lines weaving past the panels and pink dots scattered around the grid.]
We're in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is currently changing the face of work, our economies, and society as we know it. We don't know exactly what the future will look like. But we do know that these seven technologies will play a big role.
[Stat callouts: the number of countries that currently have national AI strategies; the year AI capabilities are expected to rival humans; and how many trillions of dollars gen AI could add to the global economy annually.]
Artificial intelligence is a machine's ability to perform some cognitive functions we usually associate with human minds.
Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos. Recent breakthroughs in the field have the potential to drastically change the way we approach content creation.
Artificial general intelligence (AGI) is a theoretical AI system with capabilities that rival those of a human. Many researchers believe we are still decades, if not centuries, away from achieving AGI.
Deep learning is a type of machine learning that is more capable, autonomous, and accurate than traditional machine learning.
Prompt engineering is the practice of designing inputs for AI tools that will produce optimal outputs.
Machine learning is a form of artificial intelligence that is able to learn without explicit programming by a human.
Tokenization is the process of creating a digital representation of a real thing. Tokenization can be used to protect sensitive data or to efficiently process large amounts of data.
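In the context of generative models such as ChatGPT, tokenization usually means splitting text into tokens and mapping each token to an integer ID the model can process. The following minimal Python sketch illustrates that idea; the whitespace splitting and tiny vocabulary are toy assumptions, not the sub-word scheme any production model uses.

# A toy illustration of NLP-style tokenization: text is split into tokens,
# and each token is mapped to an integer ID that a model can process.
def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign an integer ID to every distinct lowercase word in the corpus."""
    vocab: dict[str, int] = {"<unk>": 0}  # reserve 0 for unknown tokens
    for text in corpus:
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def tokenize(text: str, vocab: dict[str, int]) -> list[int]:
    """Convert text into token IDs, falling back to <unk> for unseen words."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

corpus = ["machine learning learns from data", "generative ai creates new content"]
vocab = build_vocab(corpus)
print(tokenize("machine learning creates content", vocab))  # [1, 2, 8, 10]

Real systems learn sub-word vocabularies with tens of thousands of entries, but the step from raw text to a sequence of IDs is the same in spirit.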
More here:
What’s the future of AI? – McKinsey
Fighting AI Fire with ML Firepower – University of California San Diego
Posted: at 2:42 am
Zhifeng Kong, a UC San Diego computer science PhD graduate, is the first author on the paper.
"Modern deep generative models often produce undesirable outputs such as offensive texts, malicious images, or fabricated speech, and there is no reliable way to control them. This paper is about how to prevent this from happening technically," said Zhifeng Kong, a UC San Diego Computer Science and Engineering Department PhD graduate and lead author of the paper.
"The main contribution of this work is to formalize how to think about this problem and how to frame it properly so that it can be solved," said UC San Diego computer science Professor Kamalika Chaudhuri.
Traditional mitigation methods have taken one of two approaches. The first method is to re-train the model from scratch using a training set that excludes all undesirable samples; the alternative is to apply a classifier that filters undesirable outputs or edits outputs after the content has been generated.
These solutions have certain limitations for most modern, large models. Besides being cost-prohibitive (retraining industry-scale models from scratch can require millions of dollars), these mitigation methods are computationally heavy, and there's no way to control whether third parties will implement available filters or editing tools once they obtain the source code. Additionally, they might not even solve the problem: sometimes undesirable outputs, such as images with artifacts, appear even though they are not present in the training data.
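To make the second approach concrete, here is a minimal Python sketch of post-generation filtering: a separate classifier scores each candidate output, and the system resamples whenever the score crosses a threshold. The generator and classifier below are toy stand-ins, not the models or the formal framework studied in the UC San Diego paper.

from typing import Callable

def filtered_generate(
    generate: Callable[[str], str],
    undesirability: Callable[[str], float],
    prompt: str,
    threshold: float = 0.5,
    max_attempts: int = 5,
) -> str | None:
    """Resample until an output scores below the undesirability threshold."""
    for _ in range(max_attempts):
        candidate = generate(prompt)
        if undesirability(candidate) < threshold:
            return candidate
    return None  # the caller decides how to handle repeated failures

# Toy stand-ins so the sketch runs end to end.
def toy_generator(prompt: str) -> str:
    return f"response to: {prompt}"

def toy_classifier(text: str) -> float:
    banned = {"offensive", "malicious"}
    return 1.0 if any(word in text.lower() for word in banned) else 0.0

print(filtered_generate(toy_generator, toy_classifier, "summarize the paper"))

The sketch also illustrates the weakness noted above: the filter sits outside the generative model itself, so anyone with the unfiltered weights can simply skip it.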
Read more here:
Fighting AI Fire with ML Firepower - University of California San Diego
What Can AI Learn About the Universe? – Universe Today
Posted: at 2:42 am
Artificial intelligence and machine learning have become ubiquitous, with applications ranging from data analysis and cybersecurity to pharmaceutical development, music composition, and artistic rendering. In recent years, large language models (LLMs) have also emerged, adding human interaction and writing to the long list of applications. This includes ChatGPT, an LLM that has had a profound impact since it was introduced less than two years ago. This application has sparked considerable debate (and controversy) about AI's potential uses and implications.
Astronomy has also benefitted immensely: machine learning is used to sort through massive volumes of data to look for signs of planetary transits, correct for atmospheric interference, and find patterns in the noise. According to an international team of astrophysicists, this may just be the beginning of what AI could do for astronomy. In a recent study, the team fine-tuned a Generative Pre-trained Transformer (GPT) model using observations of astronomical objects. In the process, they successfully demonstrated that GPT models can effectively assist with scientific research.
The study was conducted by the International Center for Relativistic Astrophysics Network (ICRANet), an international consortium made up of researchers from the International Center for Relativistic Astrophysics (ICRA), the National Institute for Astrophysics (INAF), the University of Science and Technology of China, the Chinese Academy of Sciences Institute of High Energy Physics (CAS-IHEP), the University of Padova, the Isfahan University of Technology, and the University of Ferrara. The preprint of their paper, "Test of Fine-Tuning GPT by Astrophysical Data," recently appeared online.
As mentioned, astronomers rely extensively on machine learning algorithms to sort through the volumes of data obtained by modern telescopes and instruments. This practice began about a decade ago and has since grown by leaps and bounds to the point where AI has been integrated into the entire research process. As ICRA President and the study's lead author Yu Wang told Universe Today via email:
"Astronomy has always been driven by data, and astronomers are some of the first scientists to adopt and employ machine learning. Now, machine learning has been integrated into the entire astronomical research process, from the manufacturing and control of ground-based and space-based telescopes (e.g., optimizing the performance of adaptive optics systems, improving the initiation of specific actions (triggers) of satellites under certain conditions, etc.), to data analysis (e.g., noise reduction, data imputation, classification, simulation, etc.), and the establishment and validation of theoretical models (e.g., testing modified gravity, constraining the equation of state of neutron stars, etc.)."
Data analysis remains the most common among these applications since it is the easiest area where machine learning can be integrated. Traditionally, dozens of researchers and hundreds of citizen scientists would analyze the volumes of data produced by an observation campaign. However, this is not practical in an age where modern telescopes are collecting terabytes of data daily. This includes all-sky surveys like the Very Large Array Sky Survey (VLASS) and the many phases conducted by the Sloan Digital Sky Survey (SDSS).
To date, LLMs have only been applied sporadically to astronomical research, given that they are a relatively recent creation. But according to proponents like Wang, the technology has had a tremendous societal impact, with a lower-limit potential equivalent to an Industrial Revolution. As for the upper limit, Wang predicts that it could range considerably and could perhaps result in humanity's enlightenment or destruction. However, unlike the Industrial Revolution, the pace of change and integration is far more rapid for AI, raising questions about how far its adoption will go.
To determine its potential for the field of astronomy, said Wang, he and his colleagues adopted a pre-trained GPT model and fine-tuned it to identify astronomical phenomena:
OpenAI provides pre-trained models, and what we did is fine-tuning, which involves altering some parameters based on the original model, allowing it to recognize astronomical data and calculate results from this data. This is somewhat like OpenAI providing us with an undergraduate student, whom we then trained to become a graduate student in astronomy.
We provided limited data with modest resolution and trained the GPT fewer times compared to normal models. Nevertheless, the outcomes are impressive, achieving an accuracy of about 90%. This high level of accuracy is attributable to the robust foundation of the GPT, which already understands data processing and possesses logical inference capabilities, as well as communication skills.
To fine-tune their model, the team introduced observations of various astronomical phenomena drawn from multiple catalogs. This included 2,000 samples of quasars, galaxies, stars, and broad absorption line (BAL) quasars from the SDSS (500 each). They also integrated observations of short and long gamma-ray bursts (GRBs), galaxies, stars, and black hole simulations. When tested, their model successfully classified different phenomena, distinguished between types of quasars, inferred their distance based on redshift, and measured the spin and inclination of black holes.
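The team's exact training pipeline is not described here, but as a rough illustration of the kind of workflow involved, the sketch below converts labeled catalog rows into a chat-format JSONL file and submits a fine-tuning job through OpenAI's Python SDK. The feature strings, class labels, file name, and choice of base model are illustrative assumptions, not the paper's actual setup.

import json
from openai import OpenAI

samples = [
    # (plain-text description of catalog features, class label)
    ("u=19.2 g=17.8 r=17.1 i=16.8 z=16.6 redshift=0.08", "galaxy"),
    ("u=18.9 g=18.7 r=18.5 i=18.4 z=18.3 redshift=1.92", "quasar"),
]

with open("astro_finetune.jsonl", "w") as f:
    for features, label in samples:
        record = {
            "messages": [
                {"role": "system", "content": "Classify the astronomical object."},
                {"role": "user", "content": features},
                {"role": "assistant", "content": label},
            ]
        }
        f.write(json.dumps(record) + "\n")

client = OpenAI()  # reads OPENAI_API_KEY from the environment
training_file = client.files.create(
    file=open("astro_finetune.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model="gpt-3.5-turbo"
)
print(job.id)  # poll this job until the fine-tuned model is ready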
"This work at least demonstrates that LLMs are capable of processing astronomical data," said Wang. "Moreover, the ability of a model to handle various types of astronomical data is a capability not possessed by other specialized models. We hope that LLMs can integrate various kinds of data and then identify common underlying principles to help us understand the world. Of course, this is a challenging task and not one that astronomers can accomplish alone."
Of course, the team acknowledges that the dataset they experimented with was very small compared to the data output of modern observatories. This is particularly true of next-generation facilities like the Vera C. Rubin Observatory, which recently received its LSST camera, the largest digital camera in the world! Once Rubin is operational, it will conduct the ten-year Legacy Survey of Space and Time (LSST), which is expected to yield 15 terabytes of data per night! Satisfying the demands of future campaigns, says Wang, will require improvements and collaboration between observatories and professional AI companies.
Nevertheless, it's a foregone conclusion that there will be more LLM applications for astronomy in the near future. Not only is this a likely development, but a necessary one considering the sheer volumes of data astronomical studies are generating today. And since data volumes are likely to increase exponentially, AI will likely become indispensable to the field.
Further Reading: arXiv
Continue reading here:
What Can AI Learn About the Universe? – Universe Today
Google adds Machine Learning to power up the Chrome URL bar – Chrome Unboxed
Posted: at 2:42 am
The Chrome URL bar, also known as the Omnibox, is an absolute centerpiece of most people's web browsing experience. Used quite literally billions of times a day, Chrome's URL bar helps users quickly find tabs and bookmarks, revisit websites, and discover new information. With the latest release of Chrome (M124), Google has integrated machine learning (ML) models to make the Omnibox even more helpful, delivering precise and relevant web page suggestions. Soon, these same models will enhance the relevance of search suggestions too.
In a recent post on the Chromium Blog, the engineering lead for the Chrome Omnibox team shared some insider perspectives on the project. For years, the team wanted to improve the Omnibox's scoring system, the mechanism that ranks suggested websites. While the Omnibox often seemed to magically know what users wanted, its underlying system was a bit rigid. Hand-crafted formulas made it difficult to improve or adapt to new usage patterns.
Machine learning promised a better way, but integrating it into such a core, heavily-used feature was obviously a complex task. The team faced numerous challenges, yet their belief in the potential benefits for users kept them driven.
Machine learning models analyze data at a scale humans simply can't. This led to some unexpected discoveries during the project. One key signal the model analyzes is the time since a user last visited a particular website. The assumption was: the more recent the visit, the more likely the user wants to go there again.
While this proved generally true, the model also detected a surprising pattern. When the time since navigation was extremely short (think seconds), the relevance score decreased. The model was essentially learning that users sometimes immediately revisit the Omnibox after going to the wrong page, indicating the first suggestion wasn't what they intended. This insight, while obvious in hindsight, wasn't something the team had considered before.
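Google has not published the Omnibox model itself, but the toy sketch below shows how a learned scorer can pick up exactly this kind of non-monotonic recency pattern from data. The synthetic relevance function, the 30-second "bounce" cutoff, and the gradient-boosted regressor are illustrative assumptions only.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Log-uniform sample of "seconds since last visit", from 1 second to 1 week.
seconds_since_visit = np.exp(rng.uniform(np.log(1.0), np.log(7 * 24 * 3600.0), size=5000))

def synthetic_relevance(t: np.ndarray) -> np.ndarray:
    # Recency helps overall, but visits within ~30 seconds often mean the user
    # bounced off the wrong page, so their relevance is suppressed.
    base = 1.0 / (1.0 + t / 3600.0)
    bounce_penalty = np.where(t < 30, 0.5, 0.0)
    return np.clip(base - bounce_penalty, 0.0, 1.0)

X = seconds_since_visit.reshape(-1, 1)
y = synthetic_relevance(seconds_since_visit) + rng.normal(0.0, 0.02, size=5000)

model = GradientBoostingRegressor().fit(X, y)
for t in (5, 120, 3600, 86400):
    score = model.predict([[float(t)]])[0]
    print(f"{t:>6}s since visit -> predicted relevance {score:.2f}")

Trained this way, the scorer rates a five-second-old visit lower than a two-minute-old one, mirroring the behavior the Chrome team observed.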
With ML models now in place, Chrome can better understand user behavior and deliver increasingly tailored suggestions to users as time goes on. Google also plans to explore specialized models for different use contexts, such as mobile browsing or enterprise environments.
Most importantly, the new system allows for constant evolution. As people's browsing habits change, Google can retrain the models on fresh data, ensuring the Omnibox remains as helpful and intuitive as possible moving forward. It's a big step up from the earlier, rigid formulas, and it will be interesting to keep an eye on the new suggestions and tricks that we'll see in the Omnibox as these ML models find their stride.
Read more from the original source:
Google adds Machine Learning to power up the Chrome URL bar - Chrome Unboxed
Navigating the black box AI debate in healthcare – HealthITAnalytics.com
Posted: at 2:42 am
May 01, 2024 - Artificial intelligence (AI) is taking the healthcare industry by storm as researchers share breakthroughs and vendors rush to commercialize advanced algorithms across various use cases.
Terms like machine learning, deep learning and generative AI are becoming part of the everyday vocabulary for providers and payers exploring how these tools can help them meet their goals; however, understanding how these tools come to their conclusions remains a challenge for healthcare stakeholders.
Black box software, in which an AI's decision-making process remains hidden from users, is not new. In some cases, the application of these models may not be an issue, but in healthcare, where trust is paramount, black box tools could present a major hurdle for AI deployment.
Many believe that if providers cannot determine how an AI generates its outputs, they cannot determine if the model is biased or inaccurate, making them less likely to trust and accept its conclusions.
This assertion has led stakeholders to question how to build trust when adopting AI in diagnostics, medical imaging and clinical decision support. Doing so requires the healthcare industry to explore the nuances of the black box debate.
In this primer, HealthITAnalytics will outline black box AI in healthcare, alternatives to the black box approach and the current AI transparency landscape in the industry.
One of the major appeals of healthcare AI is its potential to augment clinician performance and improve care, but the black box problem significantly inhibits how well these tools can deliver on those fronts.
Research published in the February 2024 edition of Intelligent Medicine explores black box AI within the context of the do no harm principle laid out in the Hippocratic Oath. This fundamental ethical rule reflects a moral obligation clinicians undertake to prevent unnecessary harm to patients, but black box AI can present a host of harms unbeknownst to both physicians and patients.
"[Black box AI] is problematic because patients, physicians, and even designers do not understand why or how a treatment recommendation is produced by AI technologies," the authors wrote, indicating that the possible harm caused by the lack of explainability in these tools is underestimated in the existing literature.
In the study, the researchers asserted that the harm resulting from medical AI's misdiagnoses may, in some cases, be more serious than that caused by human doctors' misdiagnoses, noting that the unexplainability of such systems limits patient autonomy in shared decision-making and that black box tools can create significant psychological and financial burdens for patients.
Questions of accountability and liability that come from adopting black box solutions may also hinder the proliferation of healthcare AI.
To tackle these concerns, many stakeholders across the healthcare industry are calling for the development and adoption of explainable AI algorithms.
Explainable AI (XAI) refers to "a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms," according to IBM. "[Explainability] is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making."
Having insights into these aspects of an AI algorithm, particularly in healthcare, can help ensure that these solutions meet the industry's standards.
Explainability can be incorporated into AI in a variety of ways, but clinicians and researchers have outlined a few critical approaches to XAI in healthcare in recent years.
A January 2023 analysis published in Sensors indicates that XAI techniques can be divided into categories based on form, interpretation type, model specificity and scope. Each methodology has pros and cons depending on the healthcare use case, but applications of these approaches have seen success in existing research.
A research team from the University of Illinois Urbana-Champaign's Beckman Institute for Advanced Science and Technology, writing in IEEE Transactions on Medical Imaging, demonstrated that a deep learning framework could help address the black box problem in medical imaging.
The researchers' approach involved a model for identifying disease and flagging tumors in medical images like X-rays, mammograms and optical coherence tomography (OCT). From there, the tool generates a value between zero and one to denote the presence of an anomaly, which can be used in clinical decision-making.
However, alongside these values, the model also provides an equivalency map (E-map), a transformed version of the original medical image that highlights medically interesting regions. The E-map helps the tool explain its reasoning and enables clinicians to check for accuracy and explain diagnostic findings to patients.
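The E-map construction itself is specific to this framework and is not reproduced here. As a generic illustration of how a zero-to-one anomaly score can be turned into a region-level heat map, the Python sketch below uses occlusion: mask one patch at a time and record how much the score drops. The toy scoring function stands in for a real diagnostic model.

import numpy as np

def occlusion_map(image: np.ndarray, score_fn, patch: int = 16) -> np.ndarray:
    """Return a heat map (same shape as a 2-D grayscale image) of score drops."""
    baseline = score_fn(image)
    heat = np.zeros_like(image, dtype=float)
    for r in range(0, image.shape[0], patch):
        for c in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[r:r + patch, c:c + patch] = image.mean()  # mask one patch
            heat[r:r + patch, c:c + patch] = baseline - score_fn(occluded)
    return heat

# Stand-in "model": scores an image by the brightness of its central region,
# so the resulting map should light up near the center.
def toy_score(img: np.ndarray) -> float:
    h, w = img.shape
    return float(img[h // 3: 2 * h // 3, w // 3: 2 * w // 3].mean())

image = np.random.default_rng(1).random((64, 64))
heat = occlusion_map(image, toy_score)
print(heat.round(3)[::16, ::16])  # coarse view of which patches matter most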
Other approaches to shed light on AI's decision-making have also been proposed.
In a December 2023 Nature Biomedical Engineering study, researchers from Stanford University and the University of Washington outlined how an auditing framework could be applied to healthcare AI tools to enhance their explainability.
The approach utilizes a combination of generative AI and human expertise to assess classifiers, algorithms used to categorize data inputs.
When applied to a set of dermatology classifiers, the framework helped researchers identify which image features had the most significant impact on the classifiers' decision-making. This revealed that the tools relied on both undesirable features and features leveraged by human clinicians.
These insights could aid developers looking to determine whether an AI relies too heavily on spurious data correlations and correct those issues before deployment in a healthcare setting.
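The Stanford and University of Washington framework relies on generative counterfactual images and human review, which is beyond a short snippet. A much simpler proxy for the same underlying question (which inputs is a classifier actually leaning on?) is permutation importance, sketched below on a synthetic tabular stand-in for dermatology features. The feature names and the spurious "marker" signal are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical image-derived features: lesion asymmetry, border irregularity,
# and a potentially spurious one (a ruler or marker visible in the photo).
asymmetry = rng.random(n)
border = rng.random(n)
marker_present = rng.integers(0, 2, n).astype(float)

# In this synthetic data the marker is correlated with the label (as happens
# when suspicious lesions are more often photographed with a ruler), so a
# model may learn to lean on it.
label = ((0.6 * asymmetry + 0.4 * border + 0.3 * marker_present
          + rng.normal(0.0, 0.1, n)) > 0.8).astype(int)

X = np.column_stack([asymmetry, border, marker_present])
X_train, X_test, y_train, y_test = train_test_split(X, label, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(["asymmetry", "border", "marker_present"], result.importances_mean):
    print(f"{name:>15}: importance {score:.3f}")

A non-trivial importance for the marker feature is exactly the kind of red flag such an audit is meant to surface before deployment.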
Despite these successes in XAI, there is still debate over whether these tools effectively solve the black box problem or whether black box algorithms are a problem.
While many in the healthcare industry maintain that black box algorithms are a major concern and discourage their use, some have raised questions about the nuances of these assertions. Others posit that the black box problem is an issue but indicate that XAI is not a one-size-fits-all solution.
One central talking point in these debates revolves around the use of other tools and technologies in healthcare that could be conceptualized as black box solutions.
"Although [the black box AI] discussion is ongoing, it is worth noting that the mechanism of action of many commonly prescribed medications, such as Panadol, is poorly understood and that the majority [of] doctors have only a basic understanding of diagnostic imaging tools like magnetic resonance imaging and computed tomography," explained experts writing in Biomedical Materials & Devices.
While not all healthcare tools are necessarily well-understood, such solutions can be contentious in evidence-based medicine, which prioritizes the use of scientific evidence, clinical expertise and patient values to guide care.
Some have suggested that the black-box problem is less of a concern for algorithms used in lower-stakes applications, such as those that aren't medical and instead prioritize efficiency or betterment of operations, the authors noted.
However, AI is already being used for various tasks, including decision support and risk stratification, in clinical settings, raising questions about who is responsible in the event of a system failure or error associated with using these technologies.
Explainability has been presented as a potential method to ease concerns about responsibility, but some researchers have pointed out the limitations of XAI in recent years.
In a November 2021 viewpoint published in the Lancet Digital Health, researchers from Harvard, the Massachusetts Institute of Technology (MIT) and the University of Adelaide argued that assertions about XAIs potential to improve trust and transparency represent false hope for current explainability methods.
The research team asserted that current explainability approaches are unlikely to achieve these goals for patient-level decision support due to issues like interpretability gaps, which characterize an aspect of human-computer interaction wherein a model presents its explanation and the human user must interpret that explanation.
"[This method] relies on humans to decide what a given explanation might mean. Unfortunately, the human tendency is to ascribe a positive interpretation: we assume that the feature we would find important is the one that was used," the authors explained.
This is not necessarily the case, as there can be many features some invisible to humans that a model may rely on that could lead users to form an incomplete or inaccurate interpretation.
The research team further indicated that model explanations have no performance guarantees, opening the door for other issues.
"[These explanations] are only approximations to the model's decision procedure and therefore do not fully capture how the underlying model will behave. As such, using post-hoc explanations to assess the quality of model decisions adds another source of error: not only can the model be right or wrong, but so can the explanation," the researchers stated.
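A small worked example makes the point concrete: fit an interpretable surrogate to a more complex classifier and measure how often the two agree. The data and models below are synthetic stand-ins, not those used in the studies cited here.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 5))
y = ((X[:, 0] * X[:, 1] > 0) ^ (X[:, 2] > 1)).astype(int)  # nonlinear rule

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Fit an interpretable surrogate to mimic the black box's own predictions.
surrogate = LogisticRegression().fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate agrees with the black box on {fidelity:.0%} of inputs")
# Anything below 100% means the "explanation" can be wrong even when the
# model itself is right, or vice versa.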
A 2021 article published in Science echoes these sentiments, asserting that the current hype around XAI in healthcare both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.
The authors underscored that for many applications in medicine, developers must use complicated machine learning models that require massive datasets with highly engineered features. In these cases, a simpler, interpretable AI (IAI) model couldn't be used as a substitute. XAI provides a secondary alternative, as these models can approach the high level of accuracy achieved by black box tools.
But here, users still face the issue of post-hoc explanations that may make them feel as though they understand the model's reasoning without actually shedding light on the tool's inner workings.
In light of these and other concerns, some have proposed guidelines to help healthcare stakeholders determine when it is appropriate to use black box models with explanations rather than IAI, such as when there is no meaningful difference in accuracy between an interpretable model and black box AI.
The debate around the use of black box solutions and the role of XAI is not likely to be resolved soon, but understanding the nuances in these conversations is vital as stakeholders seek to navigate the rapidly evolving landscape of AI in healthcare.
Read the original post:
Navigating the black box AI debate in healthcare - HealthITAnalytics.com
Chrome’s address bar adds machine learning to deliver better suggestions – Android Authority
Posted: at 2:42 am
TL;DR
The address bar in the Chrome browser just got a big update. Google says this update should help the address bar provide web page suggestions that are more precise and relevant than before.
In a blog post, the Mountain View-based firm announced that the latest version of Chrome (M124) will bring a big improvement to the address bar, also known as the omnibox. Specifically, Google has integrated machine learning (ML) models into the omnibox, which will provide suggestions that more accurately align with what you're looking for.
As the company explains, the tool previously relied on hand-built and hand-tuned formulas to offer suggested URLs. The problem, however, is that these formulas weren't flexible enough to be improved or adapted to different situations. Google says that with these new ML models, it can collect fresher signals, re-train, evaluate, and deploy new models over time. Since these formulas have remained largely untouched for years, this update is kind of a big deal.
Something the ML models will be able to take into account before suggesting a web page is the time since you last visited a URL. For example, if you navigated away from a page in the last few seconds or minutes, the model will give that URL a lower relevancy score as it was likely not the site you were looking for.
Going forward, the tech giant says it plans to explore training specialized versions of the model for particular environments: for example, mobile, enterprise or academic users, or perhaps different locales.
Visit link:
Chrome's address bar adds machine learning to deliver better suggestions - Android Authority
Morning Messages from Bhagavad Gita about Creation of the World – Times Now
Posted: at 2:41 am
TN Lifestyle Desk, Times Now Digital
May 4, 2024
For Today's Quote, we have picked Chapter 16, Verse 8. This verse talks about the purpose of creation of the world, and it says, "asatyam apratiṣhṭhaṁ te jagad āhur anīśhvaram aparaspara-sambhūtaṁ kim anyat kāma-haitukam".
They say, "The world is without Absolute Truth, without any basis (for moral order), and without a God (who has created or is controlling it). It is created from the combination of the two [sexes] and has no purpose other than sexual gratification."
There are two ways of refraining from immoral behavior. The first is to refrain from unrighteousness through the exercise of willpower. The second way is to abstain from sin due to fear of God. People who can abstain from sinning merely by willpower are very few. Most people desist from doing wrong due to the fear of punishment.
The only impossible journey is the one you never begin. Tony Robbins
Send this Gita Quote to your family and friends on WhatsApp and Facebook to spread positivity and motivation.
Get your daily spiritual content and astrological predictions, numerology predictions, horoscopes at timesnownews.com.
Excerpt from:
Morning Messages from Bhagavad Gita about Creation of the World - Times Now
Morning Messages from Bhagavad Gita about Problem with Demoniac natured people – Times Now
Posted: at 2:41 am
TN Lifestyle Desk, Times Now Digital
May 3, 2024
For Today's Quote, we have picked Chapter 16, Verse 7. This verse talks about the problem with people having demoniac qualities, and it says, "pravṛittiṁ cha nivṛittiṁ cha janā na vidur āsurāḥ na śhauchaṁ nāpi chāchāro na satyaṁ teṣhu vidyate".
Those possessing a demoniac nature do not comprehend which actions are proper and which are improper. Hence, they possess neither purity, nor good conduct, nor even truthfulness.
Dharma consists of codes of conduct that are conducive to one's purification and the general welfare of all living beings. Adharma consists of prohibited actions that lead to degradation and cause harm to society. The demoniac nature is devoid of faith in the knowledge and wisdom of the scriptures. Hence, those under its sway are confused about what is right and wrong action.
It is never too late to be what you might have been. George Eliot
Send this Gita Quote to your family and friends on WhatsApp and Facebook to spread positivity and motivation.
Get your daily spiritual content and astrological predictions, numerology predictions, horoscopes at timesnownews.com.
Go here to see the original:
Morning Messages from Bhagavad Gita about Problem with Demoniac natured people - Times Now
Morning Messages from Bhagavad Gita about Qualities that bring Bondage or Freedom – Times Now
Posted: at 2:40 am
TN Lifestyle Desk, Times Now Digital
Apr 29, 2024
For Today's Quote, we have picked Chapter 16, Verse 5. This verse talks about qualities that bring bondage or freedom, and it says, "daivī sampad vimokṣhāya nibandhāyāsurī matā mā śhuchaḥ sampadaṁ daivīm abhijāto 'si pāṇḍava".
The divine qualities lead to liberation, while the demoniac qualities are the cause for a continuing destiny of bondage. Grieve not, O Arjun, as you were born with saintly virtues.
Shree Krishna now explains the consequences of both. He says that the demoniac qualities keep one fettered to the samsara of life and death, while the cultivation of saintly virtues helps one break through the bondage of Maya. To tread the spiritual path successfully and pursue it till the end, a sādhak (aspirant) needs to watch out for many things. If even one of the demoniac qualities, such as arrogance or hypocrisy, remains in the personality, it can become the cause of failure.
The art of life is not controlling what happens to us, but using what happens to us. Gloria Steinem
Send this Gita Quote to your family and friends on WhatsApp and Facebook to spread positivity and motivation.
Get your daily spiritual content and astrological predictions, numerology predictions, horoscopes at timesnownews.com.
See more here:
Morning Messages from Bhagavad Gita about Qualities that bring Bondage or Freedom - Times Now
Morning Messages from Bhagavad Gita about Features of Demoniac nature – Times Now
Posted: at 2:40 am
TN Lifestyle Desk, Times Now Digital
Apr 30, 2024
For Today's Quote, we have picked Chapter 16, Verse 6. This verse talks about what makes for demoniac qualities, and it says, "dvau bhūta-sargau loke 'smin daiva āsura eva cha daivo vistaraśhaḥ prokta āsuraṁ pārtha me śhṛiṇu".
There are two kinds of beings in this world: those endowed with a divine nature and those possessing a demoniac nature. I have described the divine qualities in detail, O Arjun. Now hear from me about the demoniac nature.
Krishna says, All souls carry their natures with them from past lives. Those who cultivated virtuous qualities and performed meritorious deeds in the past lives are the ones who are born with divine natures, while those who indulged in sin and defiled their minds in previous lives carry the same tendencies into the present one. The divine and demoniac natures are the two extremes of this spectrum, with the demoniac traits dominating in the lower abodes.
Believe in your infinite potential. Your only limitations are those you set upon yourself. Roy T. Bennett
Send this Gita Quote to your family and friends on WhatsApp and Facebook to spread positivity and motivation.
Get your daily spiritual content and astrological predictions, numerology predictions, horoscopes at timesnownews.com.
Excerpt from:
Morning Messages from Bhagavad Gita about Features of Demoniac nature - Times Now