Snap Announces new Augmented Reality and Machine Learning Tools for Brands – Branding in Asia Magazine
Posted: May 5, 2024 at 2:42 am
Snap has announced new solutions, programs, and content partnerships designed to help advertisers connect with Snapchat's audience.
The company revealed a series of augmented reality (AR) and machine learning (ML) tools to help brands and advertisers engage users on the network with interactive experiences.
With AR Extensions, Snap said it will enhance the way Snapchatters experience ads, enabling advertisers to integrate AR Lenses and Filters directly into all of its ad formats, including Dynamic Product Ads, Snap Ads, Collection Ads, Commercials, and Spotlight Ads.

Advertisers can showcase their products and IP, and share their branded world with Snapchatters, through augmented reality embedded directly in their ads.

Snap said it has been investing in machine learning and automation to make AR asset creation faster and easier. The company says it can now reduce the time it takes to create AR try-on assets at scale and help brands turn 2D product catalogs into try-on experiences.

With ML Face Effects, marketers can now create branded AR ads using generative AI technology that enables custom-produced Lenses. Brands can quickly generate a unique machine learning model, create realistic face effects, and build selfie experiences for Snapchatters.
The company has evolved its 523 creator accelerator program by partnering with award-winning actress, writer, and producer Issa Rae and her branded entertainment company Ensemble to help brands partner and produce content with 523 participants. "Ensemble shares our mission to amplify the stories of creators from underrepresented communities," Snap said. "Together, we'll empower this year's 523 class of storytellers while providing brands with opportunities to collaborate directly with them."
Snap has also announced a number of sponsorship opportunities. For NBCUniversal's Paris 2024 Olympic Games, Snap has partnered with the broadcaster to bring its world to the summer games. For the first time, some of Snap's popular creators, like Livvy Dunne and Harry Jowsey, will be in Paris to bring new perspectives, reporting from the events in their unique voices.

There will also be new AR experiences produced by Snap's in-house AR team, Arcadia, so the Snap community can immerse themselves in NBC's coverage, as well as daily shows from NBC featuring the most exciting highlights from Paris.

Snap said it is also continuing its longstanding partnerships with the NFL, NBA, and WNBA to provide official content across Stories and Spotlight for its community.
The company is launching the Snap Sports Network, a sports channel within Snapchat that will cover unconventional sports, like dog surfing, extreme ironing, water bottle flipping, and others.
"Snap Sports Network is a new kind of content program that brands can leverage through sponsorships and product integrations. The launch partners include e.l.f. and Taco Bell," said Snap.
Environmental Implications of the AI Boom | by Stephanie Kirmer | May, 2024 – Towards Data Science
Posted: at 2:42 am
There's a core concept in machine learning that I often tell laypeople about to help clarify the philosophy behind what I do. That concept is the idea that the world changes around every machine learning model, often because of the model, so the world the model is trying to emulate and predict is always in the past, never the present or the future. The model is, in some ways, predicting the future (that's how we often think of it), but in many other ways the model is actually attempting to bring us back to the past.

I like to talk about this because the philosophy around machine learning helps give real perspective to us as machine learning practitioners, as well as to the users and subjects of machine learning. Regular readers will know I often say that "machine learning is us," meaning we produce the data, do the training, and consume and apply the output of models. Models are trying to follow our instructions, using raw materials we have provided to them, and we have immense, nearly complete control over how that happens and what the consequences will be.

Another aspect of this concept that I find useful is the reminder that models are not isolated in the digital world, but in fact are heavily intertwined with the analog, physical world. After all, if your model isn't affecting the world around us, that raises the question of why your model exists in the first place. If we really get down to it, the digital world is only separate from the physical world in a limited, artificial sense: that of how we as users and developers interact with it.

This last point is what I want to talk about today: how does the physical world shape and inform machine learning, and how does ML/AI in turn affect the physical world? In my last article, I promised I would talk about how the limitations of resources in the physical world intersect with machine learning and AI, and that's where we're going.

This is probably obvious if you think about it for a moment. There's a joke that goes around about how we can defeat the sentient robot overlords by just turning them off, or unplugging the computers. But jokes aside, there is a real kernel of truth here. Those of us who work in machine learning, AI, and computing generally depend completely on natural resources, such as mined metals and electricity, for our industry's existence. This has some commonalities with a piece I wrote last year about how human labor is required for machine learning to exist, but today we're going in a different direction, toward two key areas that we ought to appreciate more as vital to our work: mining and manufacturing, and energy, mainly in the form of electricity.

If you go out looking for it, there is an abundance of research and journalism about both of these areas, not only in direct relation to AI but also relating to earlier technological booms, such as cryptocurrency, which shares a great deal with AI in terms of its resource usage. I'm going to give a general discussion of each area, with citations for further reading so that you can explore the details and get to the source of the scholarship. It is hard, however, to find research that takes into account the last 18 months' boom in AI, so I expect some of this research underestimates the impact of the new technologies in the generative AI space.
What goes into making a GPU chip? We know these chips are instrumental in the development of modern machine learning models, and Nvidia, the largest producer of these chips today, has ridden the crypto boom and AI craze to a place among the most valuable companies in existence. Its stock price went from around $130 a share at the start of 2021 to $877.35 a share in April 2024, as I write this sentence, giving the company a reported market capitalization of over $2 trillion. In Q3 of 2023, Nvidia sold over 500,000 chips, for over $10 billion. Estimates put its total 2023 sales of H100s at 1.5 million, and 2024 is easily expected to beat that figure.

GPU chips involve a number of different specialty raw materials that are somewhat rare and hard to acquire, including tungsten, palladium, cobalt, and tantalum. Other elements might be easier to acquire but have significant health and safety risks, such as mercury and lead. Mining these elements and compounds has significant environmental impacts, including emissions and environmental damage to the areas where mining takes place. Even the best mining operations change the ecosystem in severe ways. This is in addition to the risk of what are called "conflict minerals": minerals mined in situations of human exploitation, child labor, or slavery. (Credit where it is due: Nvidia has been very vocal about avoiding the use of such minerals, calling out the Democratic Republic of Congo in particular.)

In addition, after the raw materials are mined, all of these materials have to be processed extremely carefully to produce the tiny, highly powerful chips that run complex computations. Workers take on significant health risks when working with heavy metals like lead and mercury, as we know from industrial history over the last 150-plus years. Nvidia's chips are made largely in factories in Taiwan run by the Taiwan Semiconductor Manufacturing Company (TSMC). Because Nvidia doesn't actually own or run the factories, it is able to bypass criticism about manufacturing conditions or emissions, and data is difficult to come by. The power required for this manufacturing is also not on Nvidia's books. As an aside, TSMC has reached the maximum of its capacity and is working on increasing it. In parallel, Nvidia is planning to begin working with Intel on manufacturing capacity in the coming year.

After a chip is produced, it can have a useful lifespan of 3-5 years if maintained well; however, Nvidia is constantly producing new, more powerful, more efficient chips (2 million a year is a lot!), so a chip's lifespan may be limited by obsolescence as well as wear and tear. When a chip is no longer useful, it goes into the pipeline of what is called e-waste. Theoretically, many of the rare metals in a chip ought to have some recycling value, but as you might expect, chip recycling is a very specialized and challenging technological task, and only about 20% of all e-waste gets recycled, including much less complex items like phones and other hardware. The recycling process also requires workers to disassemble equipment, again coming into contact with the heavy metals and other elements involved in manufacturing to begin with.
If a chip is not recycled, on the other hand, it is likely dumped in a landfill or incinerated, leaching those heavy metals into the environment via water, air, or both. This happens in developing countries, and often directly affects areas where people reside.
Most research on the carbon footprint of machine learning, and its general environmental impact, has focused on power consumption, however. So let's take a look in that direction.
Once we have the hardware necessary to do the work, the elephant in the room with AI is definitely electricity consumption. Training large language models consumes extraordinary amounts of electricity, but serving and deploying LLMs and other advanced machine learning models is also an electricity sinkhole.
In the case of training, one research paper suggests that training GPT-3, with its 175 billion parameters, consumed around 1,300 megawatt-hours (MWh), or 1,300,000 kWh, of electricity. Contrast this with GPT-4, which uses 1.76 trillion parameters and whose training is estimated to have consumed between 51,772,500 and 62,318,750 kWh of electricity. For context, an average American home uses just over 10,000 kWh per year. On the conservative end, then, training GPT-4 once could power almost 5,000 American homes for a year. (This does not consider all the power consumed by the preliminary analyses and tests that were almost certainly required to prepare the data and get ready to train.)
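A quick back-of-envelope check of the figures above (all inputs are the article's estimates, not measured values):

```python
# All constants come from the article's cited estimates.
GPT3_TRAINING_KWH = 1_300_000          # ~1,300 MWh
GPT4_TRAINING_KWH_LOW = 51_772_500     # conservative estimate
US_HOME_KWH_PER_YEAR = 10_500          # "just over 10,000 kWh per year"

homes_powered = GPT4_TRAINING_KWH_LOW / US_HOME_KWH_PER_YEAR
print(f"GPT-4 training (low estimate) = {homes_powered:,.0f} home-years of electricity")

scale_up = GPT4_TRAINING_KWH_LOW / GPT3_TRAINING_KWH
print(f"GPT-3 to GPT-4 training energy grew roughly {scale_up:.0f}x")
```

The conservative GPT-4 estimate works out to just under 5,000 home-years, matching the article's claim, and the roughly 40x jump sets up the next paragraph.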
Given that power usage between GPT-3 and GPT-4 training went up approximately 40x, we have to be concerned about the future electricity consumption of the next versions of these models, as well as the consumption involved in training models that generate video, image, or audio content.

Past the training process, which only needs to happen once in the life of a model, there's the rapidly growing electricity consumption of inference tasks, namely the cost incurred every time you ask ChatGPT a question or try to generate a funny image with an AI tool. This power is consumed by data centers, where the models run so that they can serve results around the globe. The International Energy Agency predicted that data centers alone could consume 1,000 terawatt-hours in 2026, roughly the power usage of Japan.

Major players in the AI industry are clearly aware that this kind of growth in electricity consumption is unsustainable. Estimates are that data centers consume between 0.5% and 2% of all global electricity usage, and could potentially account for 25% of US electricity usage by 2030.

Electrical infrastructure in the United States is not in good condition. We are trying to add more renewable power to our grid, of course, but we're deservedly not known as a country that manages its public infrastructure well. Texas residents in particular know the fragility of our electrical systems, but across the US, climate change, in the form of increasingly extreme weather, is causing power outages at a growing rate.

Whether investments in electricity infrastructure can meet the skyrocketing demand created by AI tools remains to be seen, and since government action is necessary to get there, it's reasonable to be pessimistic.
In the meantime, even if we do manage to produce electricity at the necessary rates, until renewable and emission-free sources of electricity are scalable, we are adding meaningfully to the globe's carbon emissions by using these AI tools. At a rough estimate of 0.86 pounds of carbon emissions per kWh of power, training GPT-4 put over 20,000 metric tons of carbon dioxide into the atmosphere. (For contrast, the average American emits 13 metric tons per year.)
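The emissions figure above follows directly from the earlier training estimate. A sketch of the arithmetic, using the article's rough 0.86 lb/kWh intensity (actual grid intensity varies widely by region):

```python
# Inputs are the article's estimates, not measurements.
LB_PER_METRIC_TON = 2204.62
CO2_LB_PER_KWH = 0.86                 # rough US-grid-style average
GPT4_TRAINING_KWH = 51_772_500        # conservative training estimate

co2_tons = GPT4_TRAINING_KWH * CO2_LB_PER_KWH / LB_PER_METRIC_TON
print(f"roughly {co2_tons:,.0f} metric tons of CO2")   # just over 20,000

# Expressed in "average Americans" (13 t/year each):
print(f"roughly {co2_tons / 13:,.0f} person-years of US per-capita emissions")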
As you might expect, I'm not out here arguing that we should quit doing machine learning because the work consumes natural resources. I think that the workers who make our lives possible deserve significant workplace safety precautions and compensation commensurate with the risk, and I think renewable sources of electricity should be a huge priority as we face down preventable, human-caused climate change.

But I talk about all this because knowing how much our work depends upon the physical world, natural resources, and the earth should make us humbler and make us appreciate what we have. When you conduct training or inference, or use ChatGPT or DALL-E, you are not the endpoint of the process. Your actions have downstream consequences, and it's important to recognize that and make informed decisions accordingly. You might be renting seconds or hours of use of someone else's GPU, but that still consumes power and causes wear on a GPU that will eventually need to be disposed of. Part of being an ethical world citizen is thinking about your choices and considering your effect on other people.

In addition, if you are interested in finding out more about the carbon footprint of your own modeling efforts, there's a tool for that: https://www.green-algorithms.org/
What’s the future of AI? | McKinsey – McKinsey
Posted: at 2:42 am
We're in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is changing the face of work, our economies, and society as we know it. We don't know exactly what the future will look like. But we do know that these seven technologies will play a big role.
[Statistics panel: the number of countries that currently have national AI strategies; the year AI capabilities will rival humans; and the amount gen AI could add annually to the global economy. The animated figures do not survive in this text version.]
Artificial intelligence is a machine's ability to perform some of the cognitive functions we usually associate with human minds.
Generative artificial intelligence (AI) describes algorithms (such as ChatGPT) that can be used to create new content, including audio, code, images, text, simulations, and videos. Recent breakthroughs in the field have the potential to drastically change the way we approach content creation.
Artificial general intelligence (AGI) is a theoretical AI system with capabilities that rival those of a human. Many researchers believe we are still decades, if not centuries, away from achieving AGI.
Deep learning is a type of machine learning, built on multilayered neural networks, that is more capable, autonomous, and accurate than traditional machine learning.
Prompt engineering is the practice of designing inputs for AI tools that will produce optimal outputs.
Machine learning is a form of artificial intelligence that is able to learn without explicit programming by a human.
Tokenization is the process of creating a digital representation of a real thing. Tokenization can be used to protect sensitive data or to efficiently process large amounts of data.
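The "protect sensitive data" sense of tokenization can be made concrete with a minimal sketch: real values are swapped for opaque tokens, and only a vault can map a token back. (Illustrative only; real tokenization systems use secure, audited storage, not an in-memory dict, and the `TokenVault` class here is invented for this example.)

```python
import secrets

class TokenVault:
    """Maps opaque tokens back to the sensitive values they replace."""

    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # e.g. a card number
print(token)                    # safe to store or pass to other systems
print(vault.detokenize(token))  # the original, recoverable only via the vault
```

Downstream systems can then process or analyze the tokenized records without ever handling the underlying sensitive values.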
Fighting AI Fire with ML Firepower – University of California San Diego
Posted: at 2:42 am
Zhifeng Kong, a UC San Diego computer science PhD graduate, is the first author on the paper.

"Modern deep generative models often produce undesirable outputs such as offensive texts, malicious images, or fabricated speech, and there is no reliable way to control them. This paper is about how to prevent this from happening technically," said Kong, a graduate of the UC San Diego Computer Science and Engineering Department and lead author of the paper.

"The main contribution of this work is to formalize how to think about this problem and how to frame it properly so that it can be solved," said UC San Diego computer science Professor Kamalika Chaudhuri.
Traditional mitigation methods have taken one of two approaches. The first method is to re-train the model from scratch using a training set that excludes all undesirable samples; the alternative is to apply a classifier that filters undesirable outputs or edits outputs after the content has been generated.
These solutions have certain limitations for most modern, large models. Besides being cost-prohibitive (retraining industry-scale models from scratch requires millions of dollars), these mitigation methods are computationally heavy, and there's no way to control whether third parties will implement available filters or editing tools once they obtain the source code. Additionally, they might not even solve the problem: sometimes undesirable outputs, such as images with artifacts, appear even though they are not present in the training data.
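The post-hoc filtering approach described above can be sketched in a few lines. Both functions here are trivial stand-ins (not the paper's method): a generator produces candidate outputs and a separate classifier rejects undesirable ones.

```python
def generate(step: int) -> str:
    # Stand-in for a generative model; cycles through canned outputs.
    samples = ["a benign caption", "UNDESIRABLE output", "another benign caption"]
    return samples[step % len(samples)]

def is_undesirable(sample: str) -> bool:
    # Stand-in for a learned safety classifier.
    return "UNDESIRABLE" in sample

def filtered_generate(step: int, max_tries: int = 10) -> str:
    # Resample until the filter accepts an output.
    for attempt in range(max_tries):
        sample = generate(step + attempt)
        if not is_undesirable(sample):
            return sample
    raise RuntimeError("no acceptable sample found")

print(filtered_generate(0))
```

The sketch also makes the paper's motivation visible: the filter sits entirely outside the model, so anyone with the model weights can simply drop it, which is exactly the control problem the researchers set out to formalize.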
What Can AI Learn About the Universe? – Universe Today
Posted: at 2:42 am
Artificial intelligence and machine learning have become ubiquitous, with applications ranging from data analysis and cybersecurity to pharmaceutical development, music composition, and artistic renderings. In recent years, large language models (LLMs) have also emerged, adding human interaction and writing to the long list of applications. This includes ChatGPT, an LLM that has had a profound impact since it was introduced less than two years ago. This application has sparked considerable debate (and controversy) about AI's potential uses and implications.

Astronomy has also benefitted immensely: machine learning is used to sort through massive volumes of data to look for signs of planetary transits, correct for atmospheric interference, and find patterns in the noise. According to an international team of astrophysicists, this may be just the beginning of what AI could do for astronomy. In a recent study, the team fine-tuned a Generative Pre-trained Transformer (GPT) model using observations of astronomical objects. In the process, they successfully demonstrated that GPT models can effectively assist with scientific research.

The study was conducted by the International Center for Relativistic Astrophysics Network (ICRANet), an international consortium made up of researchers from the International Center for Relativistic Astrophysics (ICRA), the National Institute for Astrophysics (INAF), the University of Science and Technology of China, the Chinese Academy of Sciences Institute of High Energy Physics (CAS-IHEP), the University of Padova, the Isfahan University of Technology, and the University of Ferrara. The preprint of their paper, "Test of Fine-Tuning GPT by Astrophysical Data," recently appeared online.
As mentioned, astronomers rely extensively on machine learning algorithms to sort through the volumes of data obtained by modern telescopes and instruments. This practice began about a decade ago and has since grown by leaps and bounds, to the point where AI has been integrated into the entire research process. As ICRA President and the study's lead author Yu Wang told Universe Today via email:

"Astronomy has always been driven by data, and astronomers are some of the first scientists to adopt and employ machine learning. Now, machine learning has been integrated into the entire astronomical research process, from the manufacturing and control of ground-based and space-based telescopes (e.g., optimizing the performance of adaptive optics systems, improving the initiation of specific actions (triggers) of satellites under certain conditions, etc.), to data analysis (e.g., noise reduction, data imputation, classification, simulation, etc.), and the establishment and validation of theoretical models (e.g., testing modified gravity, constraining the equation of state of neutron stars, etc.)."
Data analysis remains the most common among these applications since it is the easiest area where machine learning can be integrated. Traditionally, dozens of researchers and hundreds of citizen scientists would analyze the volumes of data produced by an observation campaign. However, this is not practical in an age where modern telescopes are collecting terabytes of data daily. This includes all-sky surveys like the Very Large Array Sky Survey (VLASS) and the many phases conducted by the Sloan Digital Sky Survey (SDSS).
To date, LLMs have only been applied sporadically to astronomical research, given that they are a relatively recent creation. But according to proponents like Wang, the technology has had a tremendous societal impact, with a lower-limit potential equivalent to an Industrial Revolution. As for the upper limit, Wang predicts that it could range considerably, perhaps resulting in humanity's enlightenment or destruction. However, unlike the Industrial Revolution, the pace of change and integration is far more rapid for AI, raising questions about how far its adoption will go.
To determine its potential for the field of astronomy, said Wang, he and his colleagues adopted a pre-trained GPT model and fine-tuned it to identify astronomical phenomena:
"OpenAI provides pre-trained models, and what we did is fine-tuning, which involves altering some parameters based on the original model, allowing it to recognize astronomical data and calculate results from this data. This is somewhat like OpenAI providing us with an undergraduate student, whom we then trained to become a graduate student in astronomy.

"We provided limited data with modest resolution and trained the GPT fewer times compared to normal models. Nevertheless, the outcomes are impressive, achieving an accuracy of about 90%. This high level of accuracy is attributable to the robust foundation of the GPT, which already understands data processing and possesses logical inference capabilities, as well as communication skills."
To fine-tune their model, the team introduced observations of various astronomical phenomena drawn from several catalogs. This included 2,000 samples of quasars, galaxies, stars, and broad absorption line (BAL) quasars from the SDSS (500 of each). They also integrated observations of short and long gamma-ray bursts (GRBs), galaxies, stars, and black hole simulations. When tested, their model successfully classified different phenomena, distinguished between types of quasars, inferred their distances based on redshift, and measured the spin and inclination of black holes.
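To give a flavor of what fine-tuning on catalog data involves, here is a hedged sketch of how tabular samples might be formatted into the JSONL chat format that OpenAI's fine-tuning endpoint accepts. The column names, values, and labels below are invented for illustration; the paper's actual preprocessing is not described in this article.

```python
import json

# Hypothetical catalog rows; real SDSS samples carry many more columns.
samples = [
    {"redshift": 2.31, "u_mag": 19.2, "g_mag": 18.7, "label": "quasar"},
    {"redshift": 0.08, "u_mag": 17.5, "g_mag": 16.1, "label": "galaxy"},
]

def to_finetune_record(row: dict) -> str:
    """Format one catalog row as a JSONL chat example: features in, label out."""
    features = {k: v for k, v in row.items() if k != "label"}
    return json.dumps({
        "messages": [
            {"role": "user", "content": f"Classify this object: {features}"},
            {"role": "assistant", "content": row["label"]},
        ]
    })

with open("astro_finetune.jsonl", "w") as f:
    for row in samples:
        f.write(to_finetune_record(row) + "\n")
```

A file in this shape can then be uploaded and referenced when creating a fine-tuning job; the heavy lifting (which columns to include, how to encode spectra or light curves as text) is where the real research effort lies.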
"This work at least demonstrates that LLMs are capable of processing astronomical data," said Wang. "Moreover, the ability of a model to handle various types of astronomical data is a capability not possessed by other specialized models. We hope that LLMs can integrate various kinds of data and then identify common underlying principles to help us understand the world. Of course, this is a challenging task and not one that astronomers can accomplish alone."
Of course, the team acknowledges that the dataset they experimented with was very small compared to the data output of modern observatories. This is particularly true of next-generation facilities like the Vera C. Rubin Observatory, which recently received its LSST camera, the largest digital camera in the world. Once Rubin is operational, it will conduct the ten-year Legacy Survey of Space and Time (LSST), which is expected to yield 15 terabytes of data per night. Satisfying the demands of future campaigns, says Wang, will require improvements and collaboration between observatories and professional AI companies.

Nevertheless, it's a foregone conclusion that there will be more LLM applications in astronomy in the near future. This is not only a likely development but a necessary one, considering the sheer volumes of data astronomical studies generate today. And since those volumes are likely to increase exponentially in the near future, AI will likely become indispensable to the field.
Further Reading: arXiv
Google adds Machine Learning to power up the Chrome URL bar – Chrome Unboxed
Posted: at 2:42 am
The Chrome URL bar, also known as the Omnibox, is an absolute centerpiece of most people's web browsing experience. Used quite literally billions of times a day, Chrome's URL bar helps users quickly find tabs and bookmarks, revisit websites, and discover new information. With the latest release of Chrome (M124), Google has integrated machine learning (ML) models to make the Omnibox even more helpful, delivering precise and relevant web page suggestions. Soon, these same models will enhance the relevance of search suggestions too.

In a recent post on the Chromium Blog, the engineering lead for the Chrome Omnibox team shared some insider perspectives on the project. For years, the team wanted to improve the Omnibox's scoring system, the mechanism that ranks suggested websites. While the Omnibox often seemed to magically know what users wanted, its underlying system was a bit rigid: hand-crafted formulas made it difficult to improve or adapt to new usage patterns.
Machine learning promised a better way, but integrating it into such a core, heavily-used feature was obviously a complex task. The team faced numerous challenges, yet their belief in the potential benefits for users kept them driven.
Machine learning models analyze data at a scale humans simply cant. This led to some unexpected discoveries during the project. One key signal the model analyzes is the time since a user last visited a particular website. The assumption was: the more recent the visit, the more likely the user wants to go there again.
While this proved generally true, the model also detected a surprising pattern: when the time since navigation was extremely short (think seconds), the relevance score decreased. The model was essentially learning that users sometimes return to the Omnibox immediately after landing on the wrong page, indicating that the first suggestion wasn't what they intended. This insight, while obvious in hindsight, wasn't something the team had considered before.
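The pattern described above can be illustrated with a toy scoring function: recency generally boosts a suggestion's score, but a visit only seconds ago is penalized, since it often means the user just landed on the wrong page. The curve shape and constants here are invented for illustration and are not Chrome's actual model.

```python
import math

def recency_score(seconds_since_visit: float) -> float:
    # Day-scale decay: older visits matter less.
    decay = math.exp(-seconds_since_visit / 86_400)
    # Seconds-scale penalty: a just-now visit likely means a mis-click.
    wrong_page_penalty = math.exp(-seconds_since_visit / 10)
    return decay * (1 - wrong_page_penalty)

# Score is low at t = a few seconds, peaks soon after, then decays.
print(recency_score(2))            # seconds ago: low (likely a mis-click)
print(recency_score(600))          # ten minutes ago: high
print(recency_score(7 * 86_400))   # a week ago: low again
```

A learned model discovers this non-monotonic shape from data, which is exactly the kind of pattern a hand-crafted "more recent is better" formula would miss.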
With ML models now in place, Chrome can better understand user behavior and deliver increasingly tailored suggestions over time. Google also plans to explore specialized models for different contexts, such as mobile browsing or enterprise environments.

Most importantly, the new system allows for constant evolution. As people's browsing habits change, Google can retrain the models on fresh data, ensuring the Omnibox remains as helpful and intuitive as possible. It's a big step up from the rigid formulas used before, and it will be interesting to watch the new suggestions and tricks that appear in the Omnibox as these ML models find their stride.
Navigating the black box AI debate in healthcare – HealthITAnalytics.com
Posted: at 2:42 am
May 01, 2024 - Artificial intelligence (AI) is taking the healthcare industry by storm as researchers share breakthroughs and vendors rush to commercialize advanced algorithms across various use cases.

Terms like "machine learning," "deep learning," and "generative AI" are becoming part of the everyday vocabulary for providers and payers exploring how these tools can help them meet their goals. However, understanding how these tools come to their conclusions remains a challenge for healthcare stakeholders.

Black box software, in which an AI's decision-making process remains hidden from users, is not new. In some cases, the opacity of these models may not be an issue, but in healthcare, where trust is paramount, black box tools could present a major hurdle to AI deployment.
Many believe that if providers cannot determine how an AI generates its outputs, they cannot determine if the model is biased or inaccurate, making them less likely to trust and accept its conclusions.
This assertion has led stakeholders to question how to build trust when adopting AI in diagnostics, medical imaging and clinical decision support. Doing so requires the healthcare industry to explore the nuances of the black box debate.
In this primer, HealthITAnalytics will outline black box AI in healthcare, alternatives to the black box approach and the current AI transparency landscape in the industry.
One of the major appeals of healthcare AI is its potential to augment clinician performance and improve care, but the black box problem significantly inhibits how well these tools can deliver on those fronts.
Research published in the February 2024 edition of Intelligent Medicine explores black box AI within the context of the "do no harm" principle laid out in the Hippocratic Oath. This fundamental ethical rule reflects a moral obligation clinicians undertake to prevent unnecessary harm to patients, but black box AI can present a host of harms unbeknownst to both physicians and patients.

"[Black box AI] is problematic because patients, physicians, and even designers do not understand why or how a treatment recommendation is produced by AI technologies," the authors wrote, indicating that the possible harm caused by the lack of explainability in these tools is underestimated in the existing literature.

In the study, the researchers asserted that the harm resulting from medical AI's misdiagnoses may, in some cases, be more serious than that caused by human doctors' misdiagnoses, noting that the unexplainability of such systems limits patient autonomy in shared decision-making, and that black box tools can create significant psychological and financial burdens for patients.

Questions of accountability and liability that come with adopting black box solutions may also hinder the proliferation of healthcare AI.
To tackle these concerns, many stakeholders across the healthcare industry are calling for the development and adoption of explainable AI algorithms.
"Explainable AI (XAI) refers to a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms," according to IBM. "[Explainability] is used to describe an AI model, its expected impact and potential biases. It helps characterize model accuracy, fairness, transparency and outcomes in AI-powered decision making."

Having insight into these aspects of an AI algorithm, particularly in healthcare, can help ensure that these solutions meet the industry's standards.
Explainability can be incorporated into AI in a variety of ways, but clinicians and researchers have outlined a few critical approaches to XAI in healthcare in recent years.
A January 2023 analysis published in Sensors indicates that XAI techniques can be divided into categories based on form, interpretation type, model specificity and scope. Each methodology has pros and cons depending on the healthcare use case, but applications of these approaches have seen success in existing research.
A research team from the University of Illinois Urbana-Champaign's Beckman Institute for Advanced Science and Technology, writing in IEEE Transactions on Medical Imaging, demonstrated that a deep learning framework could help address the black box problem in medical imaging.
The researchers' approach involved a model for identifying disease and flagging tumors in medical images like X-rays, mammograms and optical coherence tomography (OCT). From there, the tool generates a value between zero and one to denote the presence of an anomaly, which can be used in clinical decision-making.
However, alongside these values, the model also provides an equivalency map (E-map), a transformed version of the original medical image that highlights medically interesting regions, which helps the tool explain its reasoning and enables clinicians to check for accuracy and explain diagnostic findings to patients.
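The paper's E-map construction is not detailed here, but the general idea of pairing an anomaly score with a map of the regions that drive it can be sketched with a common stand-in technique, occlusion sensitivity. Everything below is illustrative: `predict_anomaly` is a fake model, and the heat map is not the Beckman team's actual method.

```python
import numpy as np

def predict_anomaly(image):
    """Stand-in for a trained model: returns a score in [0, 1].

    Faked here as the mean brightness of a fixed 'lesion' window,
    purely so the sketch runs end to end."""
    return float(image[8:16, 8:16].mean())

def occlusion_map(image, patch=4):
    """Approximate which regions drive the score by zeroing out one
    patch at a time and measuring how much the prediction drops."""
    base = predict_anomaly(image)
    h, w = image.shape
    heat = np.zeros_like(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            heat[i:i + patch, j:j + patch] = base - predict_anomaly(masked)
    return base, heat

# Toy image: a bright square simulates an anomaly.
img = np.zeros((24, 24))
img[8:16, 8:16] = 0.9
score, heat = occlusion_map(img)
```

A clinician-facing map like the E-map serves the same purpose as `heat` here: regions whose removal changes the prediction are the ones the model is "looking at," which is what lets a reader sanity-check the output.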
Other approaches to shed light on AIs decision-making have also been proposed.
In a December 2023 Nature Biomedical Engineering study, researchers from Stanford University and the University of Washington outlined how an auditing framework could be applied to healthcare AI tools to enhance their explainability.
The approach utilizes a combination of generative AI and human expertise to assess classifiers, a type of algorithm used to categorize data inputs.
When applied to a set of dermatology classifiers, the framework helped researchers identify which image features had the most significant impact on the classifiers' decision-making. This revealed that the tools relied on both undesirable features and features leveraged by human clinicians.
These insights could aid developers looking to determine whether an AI relies too heavily on spurious data correlations and correct those issues before deployment in a healthcare setting.
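The Stanford/UW auditing pipeline itself is more involved, but the underlying question, which inputs does a classifier actually lean on, can be illustrated with a minimal permutation-importance sketch. The classifier and data below are toy assumptions, not the study's models.

```python
import random

def permutation_importance(classify, rows, labels, n_trials=20, seed=0):
    """Rank features by how much shuffling each one hurts accuracy:
    a feature the classifier ignores shows no drop when scrambled."""
    rng = random.Random(seed)

    def accuracy(data):
        return sum(classify(r) == y for r, y in zip(data, labels)) / len(labels)

    base = accuracy(rows)
    drops = []
    for f in range(len(rows[0])):
        total = 0.0
        for _ in range(n_trials):
            col = [r[f] for r in rows]
            rng.shuffle(col)  # break the feature's link to the labels
            permuted = [r[:f] + (v,) + r[f + 1:] for r, v in zip(rows, col)]
            total += base - accuracy(permuted)
        drops.append(total / n_trials)
    return drops

# Toy classifier that only looks at feature 0; feature 1 is a decoy.
classify = lambda row: row[0] > 0.5
rows = [(0.9, 0.1), (0.8, 0.9), (0.2, 0.2), (0.1, 0.8)]
labels = [True, True, False, False]
drops = permutation_importance(classify, rows, labels)
```

A spurious correlation would show up the same way: a feature with a large accuracy drop that a clinician would consider medically irrelevant.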
Despite these successes in XAI, there is still debate over whether these tools effectively solve the black box problem, or indeed whether black box algorithms are a problem in the first place.
While many in the healthcare industry maintain that black box algorithms are a major concern and discourage their use, some have raised questions about the nuances of these assertions. Others posit that the black box problem is an issue but indicate that XAI is not a one-size-fits-all solution.
One central talking point in these debates revolves around the use of other tools and technologies in healthcare that could be conceptualized as black box solutions.
"Although [the black box AI] discussion is ongoing, it is worth noting that the mechanism of action of many commonly prescribed medications, such as Panadol, is poorly understood and that the majority [of] doctors have only a basic understanding of diagnostic imaging tools like magnetic resonance imaging and computed tomography," explained experts writing in Biomedical Materials & Devices.
While not all healthcare tools are necessarily well-understood, such solutions can be contentious in evidence-based medicine, which prioritizes the use of scientific evidence, clinical expertise and patient values to guide care.
"Some have suggested that the black-box problem is less of a concern for algorithms used in lower-stakes applications, such as those that aren't medical and instead prioritize efficiency or betterment of operations," the authors noted.
However, AI is already being used for various tasks, including decision support and risk stratification, in clinical settings, raising questions about who is responsible in the event of a system failure or error associated with using these technologies.
Explainability has been presented as a potential method to ease concerns about responsibility, but some researchers have pointed out the limitations of XAI in recent years.
In a November 2021 viewpoint published in the Lancet Digital Health, researchers from Harvard, the Massachusetts Institute of Technology (MIT) and the University of Adelaide argued that assertions about XAI's potential to improve trust and transparency represent "false hope" for current explainability methods.
The research team asserted that black box approaches are unlikely to achieve these goals for patient-level decision support due to issues like interpretability gaps, which characterize an aspect of human-computer interaction wherein a model presents its explanation, and the human user must interpret said explanation.
"[This method] relies on humans to decide what a given explanation might mean. Unfortunately, the human tendency is to ascribe a positive interpretation: we assume that the feature we would find important is the one that was used," the authors explained.
This is not necessarily the case, as there can be many features, some invisible to humans, that a model may rely on, which could lead users to form an incomplete or inaccurate interpretation.
The research team further indicated that model explanations have no performance guarantees, opening the door for other issues.
"[These explanations] are only approximations to the model's decision procedure and therefore do not fully capture how the underlying model will behave. As such, using post-hoc explanations to assess the quality of model decisions adds another source of error: not only can the model be right or wrong, but so can the explanation," the researchers stated.
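The point that an explanation is only an approximation of the model can be made concrete with a tiny fidelity check. Both functions below are hypothetical: an opaque XOR-like rule standing in for a black box, and a single-feature "explanation" of it. Measuring how often they agree shows the explanation can be wrong even when the model is right.

```python
def black_box(x, y):
    # Stand-in for an opaque model: an XOR-like rule that no
    # single-feature explanation can capture exactly.
    return (x > 0) != (y > 0)

def surrogate_explanation(x, y):
    # A simple post-hoc "explanation": attributes decisions to x alone.
    return x > 0

# Fidelity: how often the explanation's model agrees with the black box
# over a grid of inputs. Perfect fidelity (1.0) would be required for
# the explanation to be trustworthy everywhere.
grid = [(x / 10, y / 10) for x in range(-10, 11) for y in range(-10, 11)]
agree = sum(black_box(x, y) == surrogate_explanation(x, y) for x, y in grid)
fidelity = agree / len(grid)
```

Here the surrogate agrees with the black box only about half the time, which is exactly the "another source of error" the researchers describe: a user reading the surrogate would be misled on the other half of the inputs.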
A 2021 article published in Science echoes these sentiments, asserting that the current hype around XAI in healthcare both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.
The authors underscored that for many applications in medicine, developers must use complicated machine learning models that require massive datasets with highly engineered features. In these cases, a simpler, interpretable AI (IAI) model couldn't be used as a substitute. XAI provides a secondary alternative, as these models can approach the high level of accuracy achieved by black box tools.
But here, users still face the issue of post-hoc explanations that may make them feel as though they understand the model's reasoning without actually shedding light on the tool's inner workings.
In light of these and other concerns, some have proposed guidelines to help healthcare stakeholders determine when it is appropriate to use black box models with explanations rather than IAI, such as when there is no meaningful difference in accuracy between an interpretable model and black box AI.
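The proposed guideline reduces to a simple comparison. The sketch below encodes it as a rule of thumb; the one-point margin is an illustrative choice, not a threshold from the guidelines themselves.

```python
def prefer_interpretable(iai_accuracy, black_box_accuracy, margin=0.01):
    """If the interpretable model is within a small accuracy margin of
    the black box, prefer the interpretable one.

    The default margin (1 percentage point) is an assumption made for
    illustration, not a published standard."""
    return black_box_accuracy - iai_accuracy <= margin
```

For example, an interpretable model at 90.0% accuracy against a black box at 90.5% would fall within the margin, while a 10-point gap would not.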
The debate around the use of black box solutions and the role of XAI is not likely to be resolved soon, but understanding the nuances in these conversations is vital as stakeholders seek to navigate the rapidly evolving landscape of AI in healthcare.
Read the original post:
Navigating the black box AI debate in healthcare - HealthITAnalytics.com
Chrome’s address bar adds machine learning to deliver better suggestions – Android Authority
Posted: at 2:42 am
TL;DR
The address bar in the Chrome browser just got a big update. Google says this update should help the address bar provide web page suggestions that are more precise and relevant than before.
In a blog post, the Mountain View-based firm announced that the latest version of Chrome (M124) will bring a big improvement to the address bar, also known as the omnibox. Specifically, Google has integrated machine learning (ML) models into the omnibox, which will provide suggestions that more accurately align with what you're looking for.
As the company explains, the tool previously relied on hand-built and hand-tuned formulas to offer suggested URLs. The problem, however, is that these formulas weren't flexible enough to be improved or adapt to different situations. Google says with these new ML models, it can collect fresher signals, re-train, evaluate, and deploy new models over time. Since these formulas have remained largely untouched for years, this update is kind of a big deal.
Something the ML models will be able to take into account before suggesting a web page is the time since you last visited a URL. For example, if you navigated away from a page in the last few seconds or minutes, the model will give that URL a lower relevancy score as it was likely not the site you were looking for.
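Google has not published the model itself, but the recency signal described above can be sketched as a toy heuristic: a URL the user navigated away from moments ago gets demoted, and the penalty fades as the visit recedes into the past. The function, its decay constant, and the weights are all invented for illustration and are not Chrome's actual scoring.

```python
import math

def relevancy(base_score, seconds_since_visit):
    """Toy omnibox-style scoring (not Chrome's real model): demote URLs
    the user just left, on the intuition that a page abandoned seconds
    ago is probably not what they are searching for now."""
    # Penalty decays with time: near-total demotion at a few seconds,
    # negligible after roughly an hour (600 s decay constant assumed).
    penalty = math.exp(-seconds_since_visit / 600)
    return base_score * (1 - 0.8 * penalty)

just_left = relevancy(1.0, seconds_since_visit=5)           # moments ago
last_week = relevancy(1.0, seconds_since_visit=7 * 24 * 3600)
```

Under this sketch, a page visited five seconds ago scores far lower than the same page visited last week, matching the behavior the article describes.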
Going forward, the tech giant says it plans to explore training specialized versions of the model for particular environments: for example, mobile, enterprise or academic users, or perhaps different locales.
Visit link:
Chrome's address bar adds machine learning to deliver better suggestions - Android Authority
Top 9 Hindu Baby Boy Names Inspired By Bhagavad Gita – TheHealthSite
Posted: at 2:41 am
Healthy Diet: 6 Foods That Help Kids Sleep Better
Incorporating certain foods into a child's diet, such as dairy products, bananas, whole grains, cherries, leafy green vegetables, and poultry, may help promote better sleep due to their nutritional properties.
Continued here:
Top 9 Hindu Baby Boy Names Inspired By Bhagavad Gita - TheHealthSite
Desires got you flowing? Bhagavad Gita’s secret to stillness in Verse 70 of Chapter 2 – The Times of India
Posted: at 2:41 am
Feeling constantly pulled in different directions by your desires? Chapter 2, Verse 70 of the Bhagavad Gita offers a profound solution. This verse compares the sage, unfazed by worldly temptations, to the calm ocean amidst flowing rivers. Join us as we unpack the meaning of this verse and explore how to cultivate inner peace by letting go of desires. Learn how to achieve a state of tranquility, unshaken by the constant flow of the world around you.
Follow this link: