IBM Extends HBCU Initiatives Through New Industry Collaborations – PRNewswire
Posted: May 9, 2021 at 1:52 am
ARMONK, N.Y., May 7, 2021 /PRNewswire/ -- IBM (NYSE: IBM) announced today it has extended its IBM Global University Program with historically black colleges and universities (HBCUs) to 40 schools.
IBM is now working with the American Association of Blacks in Higher Education (AABHE), 100 Black Men of America, Inc., Advancing Minorities' Interest in Engineering (AMIE) and the United Negro College Fund (UNCF) to better prepare HBCU students for in-demand jobs in the digital economy.
In parallel, the IBM Institute for Business Value released a new report with broad-ranging recommendations on how businesses can cultivate more diverse, inclusive workforces by establishing similar programs and deepening engagement with HBCUs.
IBM's HBCU program momentum has been strong in an environment where only 43% of leaders across industry and academia believe higher education prepares students with necessary workforce skills.* In September 2020, IBM announced the investment of $100 million in assets, technology and resources to HBCUs across the United States. Through IBM Global University Programs, which include the continuously enhanced IBM Academic Initiative and IBM Skills Academy, IBM has now:
Building on this work, IBM and key HBCU ecosystem partners are now collaborating to expedite faculty and student access and use of IBM's industry resources.
In its new report, "Investing in Black Technical Talent: The Power of Partnering with HBCUs," IBM describes how HBCUs succeed in realizing their mission and innovate to produce an exceptional talent pipeline, despite serious funding challenges. IBM explains its approach to broad-based HBCU collaboration with a series of best practices for industry organizations.
IBM's series of best practices include:
To download the full report, please visit: LINK.
HBCU students continue to engage with IBM on a wide range of opportunities. These include students taking artificial intelligence, cybersecurity or cloud e-learning courses and receiving a foundational industry badge certificate in four hours. Many also attend IBM's virtual student Wednesday seminars with leading experts, such as IBM neuroscientists who discuss the implications of ethics in neurotechnology.
Statements from Collaborators
"HBCUs typically deliver a high return on investment. They have less money in their endowments, faculty is responsible for teaching a larger volume of classes per term and they receive less revenue per student than non-HBCUs. Yet, HBCUs produce almost a third of all African-American STEM graduates,"** said Valinda Kennedy, HBCU Program Manager, IBM Global University Programs and co-author of "Investing in Black Technical Talent: The Power of Partnering with HBCUs." "It is both a racial equity and an economic imperative for U.S. industry competitiveness to develop the most in-demand skills and jobs for all students and seek out HBCU students who are typically underrepresented in many of the most high-demand areas."
"100 Black Men of America, Inc. is proud to collaboratewith IBM to deliver these exceptional and needed resources to the HBCU community and students attending these institutions. The 100 has long supported and sought to identify mechanisms that aid in the sustainability of historically black colleges and universities. This collaboration and the access and opportunities provided by IBM will make great strides in advancing that goal," stated 100 Black Men of America Chairman Thomas W. Dortch, Jr.
"The American Association of Blacks in Higher Education is proud to collaborate with IBM," said Dereck Rovaris, President, AABHE. "Our mission to be the premier organization to drive leadership development, access and vital issues concerning Blacks in higher education works perfectly with IBM's mission to lead in the creation, development and manufacture of the industry's most advanced information technologies.Togetherthis collaboration will enhance both organizations and the many people we serve."
"IBM is a strong AMIE partnerwhose role is strategic and support is significant in developing a diverse engineering workforce through AMIE and our HBCU community.IBM's presence on AMIE's Board of Directors provides leadership for AMIE's strategies,key initiatives and programsto achieve our goal of a diverse engineering workforce," said Veronica Nelson, Executive Director, AMIE."IBM programslike the IBM Academic Initiative and the IBM Skills Academyprovideaccess, assets and opportunities for our HBCU faculty and students to gain high-demand skills in areas like AI, cybersecurity, blockchain, quantum computing and cloud computing. IBM is a key sponsor of the annual AMIE Design Challenge introducing students to new and emerging technologies through industry collaborations and providing experiential activities like IBM Enterprise Design Thinking, which is the foundational platform for the Design Challenge. The IBM Masters and PhD Fellowship Awards program supports our HBCU students with mentoring, collaboration opportunities on disruptive technologies as well as a financial award. The IBM Blue Movement HBCU Coding Boot Camp enables and recognizes programming competencies. IBM also sponsors scholarships for the students at the 15 HBCU Schools of Engineering to support their educational pursuits. IBM continues to evolve its engagement with AMIE and the HBCU Schools of Engineering."
"The IBM Skills Academy is timely in providing resources that support the creativity of my students in the Dual Degree Engineering Program at Clark Atlanta University," said Dr. Olugbemiga A. Olatidoye, Professor, Dual Degree Engineering and Director, Visualization, Stimulation and Design Laboratory, Clark Atlanta University. "It also allows my students to be skillful in their design thinking process, which resulted in an IBM digital badge certificate and a stackable credential for their future endeavors."
"We truly value the IBM skills programs and have benefitted from the Academic Initiative, Skills Academy and Global University Awards across all five campuses," saidDr. Derrick Warren, Interim Associate Dean and MBA Director, Southern University. "Over 24 faculty and staff have received instructor training and more than 300 students now have micro-certifications in AI, cloud, cybersecurity, data science, design thinking, Internet of Things, quantum computing and other offerings."
"At UNCF, we have a history of supporting HBCUs as they amplify their outsized impact on the Black community, and our work would not be possible without transformational partnerships with organizations like IBM and their IBM Global University Programs," said Ed Smith-Lewis, Executive Director of UNCF's Institute for Capacity Building. "We are excited to bring the resources of IBM to HBCUs, their faculty, and their students."
"IBM Skills Academy is an ideal platform for faculty to teach their students the latest in computing and internet technologies," said Dr. Sridhar Malkaram, West Virginia State University. "It helped the students in my Applied Data Mining course experience the state of the art in data science methods and analysis tools. The course completion badge/certificate has been an additional and useful incentive for students, which promoted their interest. The Skills Academy courses can be advantageously adapted by faculty, either as stand-alone courses or as part of existing courses."
About IBM: IBM is a leading global hybrid cloud, AI and business services provider. We help clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. For more information visit: https://newsroom.ibm.com/home.
*King, Michael, Anthony Marshall, Dave Zaharchuk. "Pursuit of relevance: How higher education remains viable in today's dynamic world." IBM Institute for Business Value. Accessed March 23, 2021. https://www.ibm.com/thought-leadership/institute-business-value/report/education-relevance
**Source: National Center for Education Statistics, Integrated Postsecondary Education Data System
IBM Media Relations Contact: Carrie Bendzsa, [emailprotected], +1 613-796-3880
SOURCE IBM
Here comes the world's first ever multi-node quantum network – TelecomTV
Dutch scientists working at the quantum research institute QuTech in the city of Delft, southeast of The Hague in the Netherlands, have built the first ever multi-node quantum network by managing to connect three quantum processors. The nodes can both store and process qubits (quantum bits) and the researchers have provided a proof of concept that quantum networks are not only achievable but capable of being scaled-up in size eventually to provide humanity with a quantum Internet.
When that happens, the world will become a very different place. With massive new computing capabilities made available via the power of sub-atomic particles, intractable problems that would currently take many years to solve (if they could be solved at all) using conventional silicon-based super-computers will be solved within seconds.
The ultimate goal is to enable the construction of a world-wide quantum Internet wherein quantum mechanics will permit quantum devices to communicate and conjoin to create large quantum clusters of exponentially great power easily capable of solving currently unsolvable problems at enormous speed.
Qubits, the basic building blocks of quantum computers, exist in a quantum state where, unlike traditional binary computing in which a bit represents a value of either zero or one, a qubit can exist as zero and one simultaneously. Thus quantum computers can perform an incredible number of calculations at once, but, due to the inherent instability of the quantum state, qubits can collapse the instant they are exposed to an outside environment and must "decide" to take the value of a zero or one. This raises the strong possibility that qubit calculations may, or may not, be reliable and verifiable, and so a great deal of research is underway on error-correction systems that would guarantee that the results arrived at in a quantum calculation are true.
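To make the superposition idea concrete, here is a minimal sketch using Qiskit, a quantum SDK not mentioned in the article and used here purely as an illustration: a Hadamard gate puts a single qubit into an equal mix of zero and one, and the resulting probabilities show both outcomes are equally likely until the qubit is observed.

```python
# Minimal superposition sketch (Qiskit is an assumption here, not the article's tooling).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(1)
qc.h(0)  # Hadamard gate: |0> -> (|0> + |1>) / sqrt(2)

state = Statevector.from_instruction(qc)
print(state.probabilities())  # ~[0.5, 0.5]: both outcomes equally likely until measured
```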
Say hello to Bob, Alice and Charlie, just don't look at them
A quantum Internet will come into being and continue to exist because of quantum entanglement, a remarkable physical property whereby a group of particles interact or share spatial proximity such that the quantum state of each particle cannot be determined independently of the state of the others, even when the particles are physically separated by great distances.
In other words, quantum particles can be coupled into a single fundamental connection regardless of how far apart they might be. The entanglement means that a change applied to one of the particles will instantly be echoed in the other. In quantum Internet communications, entangled particles can instantly transmit information from a qubit to its entangled other even though that other is in a quantum device on the other side of the world, or the other side of the universe come to that.
For this desired state of affairs to maintain itself, entanglement must be achieved and maintained for as long as is required. There have already been many laboratory demonstrations, commonly using fibre optics, of a physical link between two quantum devices, but two nodes do not a network make. That's why QuTech's achievement is so important. In a configuration reminiscent of the role routers play in a traditional network environment, the Dutch scientists added a third node with a physical connection to each of the two others, enabling entanglement between it and them. Thus a network was born. The researchers christened the three nodes Bob, Alice and Charlie.
So, Bob has two qubits: a memory qubit to permit the storage of an established quantum link (in this case with Alice) and a communications qubit (to permit a link with node Charlie). Once the links with Alice and Charlie are established, Bob locally connects its own two qubits, with the result that an entangled three-node network exists and Alice and Charlie are linked at the quantum level despite there being no physical link between them. QuTech has also invented the world's first quantum network protocol, which flags up a message to the research scientists when entanglement is successfully completed.
The next step will be to add more qubits to Bob, Alice and Charlie and develop hardware, software and a full set of protocols that will form the foundation blocks of a quantum Internet. That will be laboratory work but later on the network will be tested over real-world, operational telco fibre. Research will also be conducted into creating compatibility with data structures already in use today.
Another problem to be solved is how to enable the creation of a large-scale quantum network by increasing the distance that entanglement can be maintained. Until very recently that limit was 100 kilometres but researchers in Chinese universities have just ramped it up to 1,200 kilometres.
The greater the distance of travel, the more quantum devices and intermediary nodes can be deployed and the more powerful and resilient a quantum network and Internet will become. That will enable new applications such as quantum cryptography, completely secure, utterly private and unhackable comms and cloud computing, the discovery of new drugs and other applications in fields such as finance, education, astrophysics, aeronautics, telecoms, medicine, chemistry and many others that haven't even been thought of yet.
It might even provide answers to the riddle of the universal oneness of which we are all a minuscule part. Maybe the answer to the question of life, the universe and everything will turn out to be 43 rather than the 42 calculated by the supercomputer Deep Thought in Douglas Adams' "The Hitchhiker's Guide to the Galaxy". Even if that is the case, given localised quantum relativity effects and Heisenberg's Uncertainty Principle it could easily be another number, until you look at it, when it turns into a living/dead cat.
Crystal Ball Gazing at Nvidia: R&D Chief Bill Dally Talks Targets and Approach – HPCwire
There's no quibbling with Nvidia's success. Entrenched atop the GPU market, Nvidia has ridden its own inventiveness and growing demand for accelerated computing to meet the needs of HPC and AI. Recently it embarked on an ambitious expansion by acquiring Mellanox (interconnect) and is now working to complete the purchase of Arm (processor IP). Along the way, it jumped into the systems business with its DGX line. What was mostly a GPU company is suddenly quite a bit more.
Bill Dally, chief scientist and senior vice president, research, argues that R&D has been and remains a key player in Nvidia's current and long-term success. At GTC21 this spring Dally provided a glimpse into Nvidia's R&D organization and a couple of high-priority projects. Like Nvidia writ large, Dally's research group is expanding. It recently added a GPU storage systems effort and just started an autonomous vehicle research group, said Dally.
Presented here is a snapshot of the Nvidia R&D organization and a little about its current efforts as told by Dally plus a few of his Q&A responses at the end of the article.
[We] are loosely organized into a supply side and demand side. The supply side of the research lab tries to develop technology that goes directly to supply our product needs to make better GPUs [these are] VLSI design methodologies to architect the GPUs, better GPU architectures, better networking technology to connect CPUs together and into the larger datacenter, programming systems, and we recently started a new GPU storage systems group, said Dally.
The demand side of Nvidia Research aims to drive demand for GPUs. We actually have three different graphics research groups, because one thing we have to continually do is raise the bar for what is good real-time graphics. If it ever becomes good enough, eventually the integrated graphics that you get for free with certain CPUs will become good enough. And then there'll be no demand for our discrete GPUs anymore. But by introducing ray tracing, by introducing better illumination both direct and indirect, we're able to constantly raise the bar on what people demand for good real-time graphics.
Not surprisingly, AI has quickly become a priority. We have actually five different AI labs because AI has become such a huge driver for demand for GPUs, he said. A couple years ago the company opened a robotics lab. We believe that Nvidia GPUs will be the brains of all future robots, and we want to lead that revolution as robots go from being very active positioning machines to being things that interact with their environments and interact with humans. We've also just started an autonomous vehicle research group to look at technology that will lead the way for our DRIVE products.
Occasionally, said Dally, Nvidia will pull people together from the different research groups for what are called moonshots or high-impact projects. We did one of those that developed the TTU [tree traversal unit], what is now called the RT core, to introduce ray tracing to real-time graphics. We did one for a research GPU that later turned into Volta. [Moonshots] are typically larger projects that try to push technology further ahead, integrating concepts from many of the different disciplines, said Dally.
A clear focus on productizing R&D has consistently paid off for Nvidia, contends Dally. Over the years, we've had a huge influence on Nvidia technology. Almost all of ray tracing at Nvidia started within Nvidia Research, starting with the development of OptiX, the software ray tracer that forms the core of our professional graphics offering, and more recently the RT cores that have brought ray tracing to real-time and consumer graphics. We got Nvidia into networking when we developed NVSwitch, originally as a research project back in about 2012. And we got Nvidia into deep learning and AI on a collaborative project with Stanford that led to the development of cuDNN, he said.
So much for history. Today, like many others, Nvidia is investigating optical communications technology to overcome the speedbumps imposed by existing wire-based technology. Dally discussed some of Nvidia's current efforts.
When we started working on NVLink and NVSwitch, it was because we had this vision that we're not just building one GPU, but we're building a system that incorporates many GPUs, switches and connections to the larger datacenter. To do this, we need technology that allows our GPUs to communicate with each other and other elements of the system, and this is becoming harder to do for two reasons, he said.
Slowing switching times and wiring constraints are the main culprits. For example, said Dally, using 26-gauge cable you can go at different bit rates (25, 50, 100, 200 Gbps), but at 200 Gbps you're down to one meter of reach, which is barely enough to reach a top-of-rack switch from a GPU; if you speed up to 400 Gbps, it's going to be half a meter.
What we want is to get as many bits per second off a millimeter of chip edge as we can, because if you look forward, we're going to be building 100-terabit switches, and we need to get 100 terabits per second off of that switch. So we'd like to be at more than a terabit per second per millimeter of chip edge, and we'd like to be able to reach at least 10 meters. It turns out if you're building something like a DGX SuperPod, you actually need very few cables longer than that. And we'd like to have the energy per bit be down in the one picojoule-per-bit range. The technology that seems most promising to do this is dense wavelength division multiplexing with integrated silicon photonics.
Conceptually the idea is pretty straightforward.
This chart (below) shows the general architecture. We start with a laser comb source. This is a laser that produces a number of different colors of light. I say different colors, [but they] are imperceptibly different, by like 100 gigahertz in frequency, but it produces these different colors of light and sends them over a supply fiber to our transmitter. In the transmitter, we have a number of ring resonators that are able to individually modulate (on-and-off) the different colors of light. So we can take one color of light and modulate it at some bit rate on and off. We do this simultaneously in parallel on all of the other colors and get a bit rate which is the product of the number of colors we have and the bit rate we're switching per color. We send that over a fiber with a reach of 10-to-100 meters to our receiving integrated circuit. [There] we pick off with ring resonators the different colors that are now either on or off with a bitstream and send that to photodetectors and transimpedance amplifiers and on up to the receiver, described Dally.
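The aggregate bandwidth Dally describes is simply the number of colors multiplied by the per-color bit rate. Here is a back-of-the-envelope sketch; the wavelength count, modulation rate, and fiber density are illustrative assumptions, not figures from the talk.

```python
# Back-of-the-envelope DWDM link budget (all inputs are assumed, not from Dally's talk).
wavelengths_per_fiber = 32       # colors from the laser comb source (assumed)
gbps_per_wavelength = 32         # on/off modulation rate per ring resonator (assumed)
fibers_per_mm_of_edge = 1        # fiber attach density at the chip edge (assumed)

tbps_per_fiber = wavelengths_per_fiber * gbps_per_wavelength / 1000
tbps_per_mm = tbps_per_fiber * fibers_per_mm_of_edge
print(f"{tbps_per_fiber:.2f} Tbps per fiber, {tbps_per_mm:.2f} Tbps per mm of chip edge")
# With these assumptions, roughly 1 Tbps/mm -- the ballpark Dally says is needed --
# and on the order of 100 fibers would feed a 100 Tbps switch.
```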
Dally envisions a future optical DGX where a GPU will communicate via an organic package to an electrical integrated circuit that basically takes that GPU link and modulates the individual ring resonators that you saw in the previous figure on the photonic integrated circuit. The photonic integrated circuit accepts the supply fiber from the laser, has the ring resonator modulators, and drives that fiber to the receiver. The receiver will have an NVSwitch and has the same photonic integrated circuit. But now we're on the receive side, where the ring resonators pick the wavelengths off to the electrical integrated circuit, and it drives the switch.
The key to this is that optical engine, he said, which has a couple of components on it. It has the host electrical interface that receives a short reach electrical interface from the GPU. It has modulator drivers to modulate the ring resonators as well as control circuitry, for example, to maintain the temperature of the ring resonators [which must be at] a very accurate temperature to keep the frequency stable. It then has waveguides to grating couplers that couple that energy into the fiber that goes to the switch.
Many electronic system and device makers are grappling with the interconnect bandwidth issue. Likely at a future GTC, one of Dally's colleagues from product management will be showcasing new optical interconnect systems while the Nvidia R&D team works on a new set of projects.
I hope that the projects I described for you today [will achieve] future success, but we never know. Some of our projects become the next RT core. Some of our projects [don't work as planned, and] we quietly declare success and move on to the next one. But we are trying to do everything that we think could have impact on Nvidia's future.
POSTSCRIPTS: Dally Quick Hits During Q&A
Nvidia R&D Reach: Go Where the Talent Is
We are already geographically very, very diverse. I have a map. Of course, it's not in the slide deck (shrugs), we're all over North America and Europe. And a couple years ago, actually, even before the Mellanox acquisition, we opened an office in Tel Aviv. What's driven this geographic expansion has been talent, we find smart people. And there are a lot of smart people who don't want to move to Santa Clara, California. So we basically create an office where they are. I think there are certainly some gaps. One gap I see as a big gap is an office in Asia; there are an awful lot of smart people in Asia, a lot of interesting work coming out of there. And I think Africa and South America clearly have talent pools we want to be tapping as well.
On Fab Technology's Future
So what will be the future of computing when the fab processing technology reaches near sub-nanometer scaling, with respect to quantum computing? That's a good question, but I don't know that I've given that much thought. I think we've got a couple generations to go. Ampere's in seven nanometers and we see our way clearly to five nanometers and three nanometers, and the devices there operate very classically. Quantum computing, I think if we move there, it's not going to be, you know, with conventional fabs. It's going to be with these Josephson junction-based technologies that a lot of people are experimenting with, or with photonics, or with trapped ions. We have done a study group to look at quantum computing and have seen that, as a technology, it is pretty far out. But our strategy is to enable [quantum] by things like the recently announced cuQuantum (SDK) so that we can both help people simulate quantum algorithms until quantum computers are available, and ultimately run the classical part of those quantum computers on our GPUs.
Not Betting on Neuromorphic Tech
The next one is: do you see Nvidia developing neuromorphic hardware to support spiking neural networks? The short answer is no. I've actually spent a lot of time looking at neuromorphic computing. I spent a lot of time looking at a lot of emerging technologies and try to ask the question, Could these technologies make a difference for Nvidia? For neuromorphic computing the answer is no, and it sort of comes down to three things. One of them is the spiking representation, which is actually a pretty inefficient representation of data, because you're toggling a line up and down multiple times to signal a number. To have, say, a 256-level dynamic range, on average you'd have to toggle 128 times, and that [requires] probably 64 times more energy than an integer representation. Then there's the analog computation; we've looked at analog computation, finding it to be less energy efficient when you consider the need to convert to digital to store the results. And then there's the different models they typically come up with. If those models were better than models like BERT for language or ResNet for imaging, people would be using them, but they don't win the competitions. So we're not looking at spiking things right now.
Can DL Leverage Sparsity? Yes.
The next question here is: can deep learning techniques leverage sparsity, for example a sparse Adam optimizer or sparse attention, to take advantage of the sparse matrix multiplication mechanisms in the Ampere tensor cores? That's a bit off topic, but the short answer is yes. I mean, neural networks are fundamentally sparse. [A colleague and] I had a paper at NeurIPS in 2015, where we showed that you can basically prune most convolution layers down to 30 percent density and most fully-connected layers down to 10 percent or less density with no loss of accuracy. So I think that getting to the 50 percent you need to exploit the sparse matrix multiply units in Ampere is actually very easy. And I think we're going to see, actually we've already seen, that applied kind of across the board; on the matrix multiply it gives you a 2x improvement. But over the whole application, which includes all these things that aren't matrix multiply, like the normalization step, the nonlinear operator and the pooling, even considering all of that and Amdahl's law we still get a 1.5x speedup on BERT applying the sparse tensor cores.
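To give a rough sense of the pruning step Dally mentions, below is a minimal magnitude-pruning sketch using PyTorch's built-in pruning utilities. It only illustrates pruning a layer to a target density; it does not produce the specific 2:4 structured pattern that Ampere's sparse tensor cores exploit.

```python
# Minimal magnitude-pruning sketch with PyTorch's pruning utilities (illustrative only).
import torch.nn as nn
import torch.nn.utils.prune as prune

fc = nn.Linear(1024, 1024)
prune.l1_unstructured(fc, name="weight", amount=0.7)   # zero out 70% of weights by magnitude

density = (fc.weight != 0).float().mean().item()
print(f"remaining density: {density:.2f}")             # ~0.30, the density level Dally cites
```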
Harnessing the power of machine learning with MLOps – VentureBeat
MLOps, a compound of machine learning and information technology operations, is a newer discipline involving collaboration between data scientists and IT professionals with the aim of productizing machine learning algorithms. The market for such solutions could grow from a nascent $350 million to $4 billion by 2025, according to Cognilytica. But certain nuances can make implementing MLOps a challenge. A survey by NewVantage Partners found that only 15% of leading enterprises have deployed AI capabilities into production at any scale.
Still, the business value of MLOps can't be ignored. A robust data strategy enables enterprises to respond to changing circumstances, in part by frequently building and testing machine learning technologies and releasing them into production. MLOps essentially aims to capture and expand on previous operational practices while extending these practices to manage the unique challenges of machine learning.
MLOps, which was born at the intersection of DevOps, data engineering, and machine learning, is similar to DevOps but differs in execution. MLOps combines different skill sets: those of data scientists specializing in algorithms, mathematics, simulations, and developer tools and those of operations administrators who focus on tasks like upgrades, production deployments, resource and data management, and security.
One goal of MLOps is to roll out new models and algorithms seamlessly, without incurring downtime. Because production data can change due to unexpected events, and machine learning models only respond well to scenarios they have seen before, frequent retraining or even continuous online training can make the difference between an optimal and suboptimal prediction.
A typical MLOps software stack might span data sources and the datasets created from them, as well as a repository of AI models tagged with their histories and attributes. Organizations with MLOps operations might also have automated pipelines that manage datasets, models, experiments, and software containers typically based on Kubernetes to make running these jobs simpler.
At Nvidia, developers running jobs on internal infrastructure must perform checks to guarantee they're adhering to MLOps best practices. First, everything must run in a container to consolidate the libraries and runtimes necessary for AI apps. Jobs must also launch containers with an approved mechanism and run across multiple servers, as well as showing performance data to expose potential bottlenecks.
Another company embracing MLOps, software startup GreenStream, incorporates code dependency management and machine learning model testing into its development workflows. GreenStream automates model training and evaluation and leverages a consistent method of deploying and serving each model while keeping humans in the loop.
Given all the elements involved with MLOps, it isn't surprising that companies adopting it often run into roadblocks. Data scientists have to tweak various features like hyperparameters, parameters, and models while managing the codebase for reproducible results. They also need to engage in model validation, in addition to conventional code tests, including unit testing and integration testing. And they have to use a multistep pipeline to retrain and deploy a model, particularly if there's a risk of reduced performance.
When formulating an MLOps strategy, it helps to begin by framing machine learning objectives from business growth objectives. These objectives, which typically come in the form of KPIs, can have certain performance measures, budgets, technical requirements, and so on. From there, organizations can work toward identifying input data and the kinds of models to use for that data. This is followed by data preparation and processing, which includes tasks like cleaning data and selecting relevant features (i.e., the variables used by the model to make predictions).
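As a concrete illustration of the data preparation and feature selection steps, here is a minimal sketch with scikit-learn; the article names no particular toolchain, and the column names, data, and label below are hypothetical.

```python
# Minimal data-prep and feature-selection sketch (illustrative data and columns).
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "monthly_spend": [120.0, None, 340.5, 80.0],   # hypothetical features
    "tenure_months": [3, 14, 25, 1],
    "support_calls": [4, 0, 1, 7],
    "churned":       [1, 0, 0, 1],                  # hypothetical KPI-aligned label
})
X, y = df.drop(columns="churned"), df["churned"]

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # clean missing values
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=2)),        # keep the most relevant features
    ("model", LogisticRegression()),
])
pipeline.fit(X, y)
```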
The importance of data selection and prep can't be overstated. In a recent Alation survey, a clear majority of employees pegged data quality issues as the reason their organizations failed to successfully implement AI and machine learning. Eighty-seven percent of professionals said inherent biases in the data being used in their AI systems produce discriminatory results that create compliance risks for their organizations.
At this stage, MLOps extends to model training and experimentation. Capabilities like version control can help keep track of data and model qualities as they change throughout testing, as well as helping scale models across distributed architectures. Once machine learning pipelines are built and automated, deployment into production can proceed, followed by the monitoring, optimization, and maintenance of models.
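One small, hand-rolled sketch of the versioning idea described above: tying a model artifact to the data and metrics it was produced with. Real MLOps stacks typically rely on dedicated registries and experiment trackers, which the article does not name; the function below is purely illustrative.

```python
# Hand-rolled model/version tracking sketch (illustrative; not a production registry).
import hashlib, json, time

def register_model(model_path: str, data_path: str, metrics: dict, registry="registry.json"):
    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()
    entry = {
        "model_sha256": sha256(model_path),     # fingerprint of the trained artifact
        "data_sha256": sha256(data_path),       # fingerprint of the dataset it was trained on
        "metrics": metrics,                     # e.g. validation scores from this run
        "registered_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    try:
        with open(registry) as f:
            history = json.load(f)
    except FileNotFoundError:
        history = []
    history.append(entry)
    with open(registry, "w") as f:
        json.dump(history, f, indent=2)
    return entry
```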
A critical part of monitoring models is governance, which here means adding control measures to ensure the models deliver on their responsibilities. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them, and will punish those that don't. The study suggests companies that don't approach the issue thoughtfully can incur both reputational risk and a direct hit to their bottom line.
In sum, MLOps applies to the entire machine learning lifecycle, including data gathering, model creation, orchestration, deployment, health, diagnostics, governance, and business metrics. If successfully executed, MLOps can bring business interest to the fore of AI projects while allowing data scientists to work with clear direction and measurable benchmarks.
Enterprises that ignore MLOps do so at their own peril. There's a shortage of data scientists skilled at developing apps, and it's hard to keep up with evolving business objectives, a challenge exacerbated by communication gaps. According to a 2019 IDC survey, skills shortages and unrealistic expectations from the C-suite are the top reasons for failure in machine learning projects. In 2018, Element AI estimated that of the 22,000 Ph.D.-educated researchers working globally on AI development and research, only 25% are well-versed enough in the technology to work with teams to take it from research to application.
There's also the fact that models frequently drift away from what they were intended to accomplish. Assessing the risk of these failures as a part of MLOps is a key step not only for regulatory purposes, but to protect against business impacts. For example, the cost of an inaccurate video recommendation on YouTube would be much lower compared with flagging an innocent person for fraud and blocking their account or declining their loan applications.
The advantage of MLOps is that it puts operations teams at the forefront of best practices within an organization. The bottleneck that results from machine learning algorithms eases with a smarter division of expertise and collaboration from operations and data teams, and MLOps tightens that loop.
How Machine Learning is Beneficial to the Police Departments? – CIOReview
It is important to understand the basic nature of machines like computers in order to understand what machine learning is. Computers are devices that follow instructions, and machine learning brings in an interesting twist: a computer can learn from experience without being explicitly programmed. Machine learning takes computers to another level, where they can learn intuitively in a manner similar to humans. It has several applications, including virtual assistants, predictive traffic systems, surveillance systems, face recognition, spam and malware filtering, fraud detection, and so on.
The police can utilize machine learning effectively to resolve the challenges that they face. Machine learning helps in predictive policing, where they can prevent crimes and improve public safety. Here are a few ways the police can leverage machine learning to achieve better results.
Pattern recognition
One of the most robust applications of machine learning in policing is in the field of pattern recognition. Crimes can be related and might either be done by the same person or use the same modus operandi. The police can gain an advantage if they can spot the patterns in crimes. The data that the police gather from crimes is essentially unstructured. This data must be organized and sifted through to find the patterns.
Machine learning can help achieve this easily. Machine learning tools can compare numerous crimes easily and generate a similarity score. The software can then utilize these scores to try and determine if there are common patterns. The New York Police Department is implementing this, and the tool has been utilized to crack cases effectively.
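As an illustration of the similarity-score idea (and not a description of the NYPD's actual tool), the short sketch below scores how alike free-text incident reports are using TF-IDF and cosine similarity from scikit-learn; the reports are invented.

```python
# Illustrative pattern-recognition sketch: pairwise similarity of made-up crime reports.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "Suspect forced rear window, took laptops, left through alley",
    "Rear window pried open, electronics stolen, exit via back alley",
    "Wallet snatched from victim on subway platform during rush hour",
]
tfidf = TfidfVectorizer(stop_words="english")
scores = cosine_similarity(tfidf.fit_transform(reports))
print(scores.round(2))  # reports 0 and 1 score far higher with each other than with 2
```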
Cybersecurity
Cybersecurity is a vital area in today's world. With the extensive usage of the internet everywhere, cybercriminals are targeting computer systems around the globe. Cybersecurity is critical not only for solving cases but for preventing them proactively. Cybersecurity can be enhanced with the use of machine learning. Tools that use machine learning can improve cybersecurity and proactively prevent crimes.
Predictive analytics
Another area of machine learning that can help the police is predictive analytics. This is a powerful application of machine learning that the police can leverage to achieve substantial results. A tool with predictive analytics features utilizes machine learning to help the police improve public safety. These tools focus on crime trends and are thus beneficial. When these trends are spotted, law enforcement can proactively take action.
4 Stocks to Watch Amid Rising Adoption of Machine Learning – Zacks.com
Machine learning (ML) has been gaining precedence over the past few years as organizations are rapidly implementing ML solutions to increase efficiency by delivering more accurate results as well as providing a better customer experience. Notably, when it comes to automation, ML has become a driving force as it involves training the Artificial Intelligence (AI) to learn a task and carry it out efficiently, minimizing the need for human intervention.
In any case, ML was already witnessing rapid adoption and the outbreak of the COVID-19 pandemic last year helped in accelerating that demand, as organizations began to rely heavily on automation to carry out their operations.
Markedly, ML is gradually becoming an integral part of various sectors as the trend of digitization picks up. Notably, ML is finding application in the finance sector as, among other uses, it helps in better fraud detection and enables automated trading for investors. Meanwhile, ML is also making its way into healthcare, as with the help of algorithms, big volumes of data like healthcare records can be studied to identify patterns related to diseases, thereby allowing practitioners to deliver more efficient and precise treatments.
Moreover, the retail segment has been using ML to optimize the experience of their customers by providing streamlined recommendations. Interestingly, ML also helps retailers in gauging the current market situation and determining the prices of their products accordingly, thereby increasing their competitiveness. Meanwhile, virtual voice assistants are also utilizing ML to learn from previous interactions and, in turn, provide a much-improved user experience over time.
In its Top 10 Strategic Technology Trends for 2020 report, Gartner mentioned hyperautomation as one of the top-most technological trends. Notably, it involves the use of advanced technologies like AI and ML to automate processes and augment humans. This means that in tasks where hyperautomation will be implemented, the need for human involvement will gradually reduce as decision-making will increasingly become AI-driven.
Reflective of the positive developments that ML is bringing to various organizations spread across multiple sectors, the ML market looks set to grow. A report by Verified Market Research stated that the ML market is estimated to witness a CAGR of 44.9% from 2020 to 2027. Moreover, businesses are also using Machine Learning as a Service (MLaaS) models to customize their applications with the help of available ML tools. Notably, a report by Orion Market Reports stated that the MLaaS is estimated to grow at an annual average of 43% from 2021 to 2027, as mentioned in a WhaTech article.
Machine learning has been taking the world of technology by storm, allowing computers to learn by studying huge volumes of data and deliver improved results while reducing the need for human intervention. This makes it a good time then to look at companies that can make the most of this ongoing trend. Notably, we have selected four such stocks that carry a Zacks Rank #1 (Strong Buy), 2 (Buy) or 3 (Hold). You can see the complete list of today's Zacks #1 Rank stocks here.
Alphabet Inc.'s (GOOGL) Google has been using ML across various applications like YouTube, Gmail, Google Photos, Google Voice Assistant and so on, to optimize the user experience. Moreover, Google's Cloud AutoML allows developers to train high-quality models suited to their business needs. The company currently has a Zacks Rank #1. The Zacks Consensus Estimate for its current-year earnings increased 27.3% over the past 60 days. The company's expected earnings growth rate for the current year is nearly 50%.
NVIDIA Corporation (NVDA) offers ML and analytics software libraries to accelerate the ML operations of businesses. The company currently has a Zacks Rank #2. The Zacks Consensus Estimate for its current-year earnings increased 2.2% over the past 60 days. The company's expected earnings growth rate for the current year is 35.6%.
Microsoft Corporation (MSFT) provides its Azure platform for ML, allowing developers to build, train and deploy ML models. The company currently has a Zacks Rank #2. The Zacks Consensus Estimate for its current-year earnings increased 5.8% over the past 60 days. The company's expected earnings growth rate for the current year is 35.4%.
Amazon.com, Inc. (AMZN) is making use of ML models to train its virtual voice assistant Alexa. Moreover, Amazon's AWS platform offers ML services to suit specific business needs. The company currently has a Zacks Rank #3. The Zacks Consensus Estimate for its current-year earnings increased 11.3% over the past 60 days. The company's expected earnings growth rate for the current year is 31.7%.
All The Machine Learning Libraries Open-Sourced By Facebook Ever – Analytics India Magazine
Today, corporations like Google, Facebook and Microsoft have been dominating the tools and deep learning frameworks that AI researchers use globally. Many of their open-source libraries are now gaining popularity on GitHub, which is helping budding AI developers across the world build flexible and scalable machine learning models.
From conversational chatbots and self-driving cars to weather forecasting and recommendation systems, AI developers are experimenting with various neural network architectures, hyperparameters, and other features to fit the hardware constraints of edge platforms. The possibilities are endless. Some of the popular deep learning frameworks include Google's TensorFlow and Facebook's Caffe2, PyTorch, TorchCraftAI and Hydra.
According to Statista, AI business operations global revenue is expected to touch $10.8 billion by 2023, and the natural language processing (NLP) market size globally is expected to reach $43.3 billion by 2025. With the rise of AI adoption across businesses, the need for open-source libraries and architecture will only increase in the coming months.
Advancing in artificial intelligence, Facebook AI Research (FAIR) at present is leading the AI race with the launch of state of the art technology tools, libraries and frameworks to bolster machine learning and AI applications across the globe.
Here are some of the latest open-source tools, libraries and architecture developed by Facebook:
PyTorch is the most widely used deep learning framework, besides Caffe2 and Hydra, which helps researchers build flexible machine learning models.
PyTorch provides a Python package for high-level features like tensor computation (like NumPy) with strong GPU acceleration and TorchScript for an easy transition between eager mode and graph mode. Its latest release provides graph-based execution, distributed training, mobile deployment and more.
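A tiny sketch of those two features, GPU-capable tensor math and TorchScript compilation:

```python
# Minimal PyTorch sketch: NumPy-like tensor math (optionally on a GPU) plus TorchScript.
import torch

x = torch.randn(3, 3)
if torch.cuda.is_available():      # move to GPU when one is present
    x = x.cuda()
y = x @ x.T + 1.0                  # tensor computation, NumPy-style

@torch.jit.script                  # TorchScript: compile to a graph for deployment
def relu_sum(t: torch.Tensor) -> torch.Tensor:
    return torch.relu(t).sum()

print(relu_sum(y))
```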
Flashlight is an open-source machine learning library that lets users execute AI/ML applications using C++ API. Since it supports research in C++, Flashlight does not need external figures or bindings to perform tasks such as threading, memory mapping, or interoperating with low-level hardware. Thus, making the integration of code fast, direct and straightforward.
Opacus is an open-source high-speed library for training PyTorch models with differential privacy (DP). The library is claimed to be more scalable than existing methods. It supports training with minimal code changes and has little impact on training performance. It also allows the researchers to track the privacy budget expended at any given moment.
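A hedged sketch of what differentially private training with Opacus can look like; the exact API differs between Opacus releases, and the model, data, and noise settings below are illustrative.

```python
# Hedged Opacus sketch (API details vary by release; model, data, and settings are toy).
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=0.1)
data = DataLoader(TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))), batch_size=8)

privacy_engine = PrivacyEngine()
model, optimizer, data = privacy_engine.make_private(
    module=model, optimizer=optimizer, data_loader=data,
    noise_multiplier=1.0, max_grad_norm=1.0,   # DP noise and per-sample gradient clipping
)

criterion = nn.CrossEntropyLoss()
for x, y in data:
    optimizer.zero_grad()
    criterion(model(x), y).backward()
    optimizer.step()
print("epsilon spent:", privacy_engine.get_epsilon(delta=1e-5))  # track the privacy budget
```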
PyTorch3D is a highly modular and optimised library that offers efficient, reusable components for 3D computer vision research with the PyTorch framework. It is designed to integrate smoothly with deep learning methods for predicting and manipulating 3D data. As a result, the library can be implemented using PyTorch tensors, handle mini-batches of heterogeneous data, and utilise GPUs for acceleration.
Detectron2 is a next-generation library that provides detection and segmentation algorithms. It is a fusion of Detectron and maskrcnn-benchmark. Currently, it supports several computer vision research projects and applications, and can be used with models such as Mask R-CNN, RetinaNet, Faster R-CNN, RPN and TensorMask.
Detectron is an open-source software architecture that implements object detection algorithms like Mask R-CNN. The software has been written in Python and powered by the Caffe2 deep learning framework.
Detectron has enabled various research projects at Facebook, including feature pyramid networks for object detection, Mask R-CNN, non-local neural networks, detecting and recognising human-object interactions, learning to segment everything, data distillation: towards omni-supervised learning, focal loss for dense object detection, DensePose: dense human pose estimation in the wild, and others.
Prophet is an open-source architecture released by Facebook's core data science team. It is a procedure for forecasting time-series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. The model works best with time-series data that has several seasons of historical data, such as weather records, economic indicators and patient health evolution metrics.
The code is available on CRAN and PyPI.
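A minimal forecasting sketch with the Python package (which, depending on the release, imports as prophet or fbprophet); the daily series here is synthetic.

```python
# Minimal Prophet sketch: the dataframe needs a 'ds' date column and a 'y' value column.
import pandas as pd
from prophet import Prophet

df = pd.DataFrame({
    "ds": pd.date_range("2019-01-01", periods=730, freq="D"),
    "y": range(730),                               # stand-in for a real daily time series
})
m = Prophet(yearly_seasonality=True, weekly_seasonality=True)
m.fit(df)
future = m.make_future_dataframe(periods=90)       # forecast 90 days ahead
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```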
Classy Vision is a new end-to-end PyTorch-based framework for large-scale training of image and video classification models. Unlike other computer vision (CV) libraries, Classy Vision claims to offer flexibility for researchers.
Typically, most CV libraries lead to duplicative efforts and require users to migrate research between frameworks and relearn the minutiae of efficient distributed training and data loading. On the other hand, Facebook's PyTorch-based CV framework is claimed to offer a better solution for training at scale and deploying to production.
BoTorch is a library for Bayesian optimization built on the PyTorch framework. Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional form.
BoTorch provides a modular and easily extensible interface for composing Bayesian optimization primitives such as probabilistic models, acquisition functions and optimizers. In addition, it enables seamless integration with deep or convolutional architectures in PyTorch.
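A hedged sketch of one Bayesian-optimization step with BoTorch: fit a Gaussian-process surrogate, then maximize an Expected Improvement acquisition function. The toy objective and bounds are assumptions, and helper names can vary between releases.

```python
# Hedged BoTorch sketch of a single Bayesian-optimization step (toy objective and bounds).
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_model
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = -((train_X - 0.5) ** 2).sum(dim=-1, keepdim=True)   # toy objective to maximize

gp = SingleTaskGP(train_X, train_Y)                           # probabilistic surrogate model
fit_gpytorch_model(ExactMarginalLogLikelihood(gp.likelihood, gp))

ei = ExpectedImprovement(gp, best_f=train_Y.max())            # acquisition function
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, _ = optimize_acqf(ei, bounds=bounds, q=1, num_restarts=5, raw_samples=32)
print(candidate)                                              # next point to evaluate
```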
FastText is an open-source library for efficient text classification and representation learning. It works on standard, generic hardware, and models can later be reduced in size to fit on mobile devices.
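A minimal text-classification sketch with the fastText Python bindings; the two-line training file is invented and only shows the __label__ format.

```python
# Minimal fastText sketch: train a tiny classifier, predict, then shrink the model.
import fasttext

with open("reviews.train", "w") as f:
    f.write("__label__positive great phone, battery lasts all day\n")
    f.write("__label__negative screen cracked after a week\n")

model = fasttext.train_supervised(input="reviews.train", epoch=25, lr=1.0)
print(model.predict("battery life is fantastic"))     # predicted label and probability

model.quantize(input="reviews.train", retrain=True)   # reduce model size, e.g. for mobile
model.save_model("reviews.ftz")
```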
Tensor Comprehensions (TC) is a fully functional C++ library that automatically synthesises high-performance machine learning kernels using Halide, ISL, NVRTC or LLVM. The library can be easily integrated with Caffe2 and PyTorch and has been designed to be highly portable and machine-learning-framework agnostic. It requires only a simple tensor library with memory allocation, offloading, and synchronisation capabilities.
AI Magic Just Removed One of the Biggest Roadblocks in Astrophysics – SciTechDaily
Using neural networks, Flatiron Institute research fellow Yin Li and his colleagues simulated vast, complex universes in a fraction of the time it takes with conventional methods.
Using a bit of machine learning magic, astrophysicists can now simulate vast, complex universes in a thousandth of the time it takes with conventional methods. The new approach will help usher in a new era in high-resolution cosmological simulations, its creators report in a study published online on May 4, 2021, in Proceedings of the National Academy of Sciences.
At the moment, constraints on computation time usually mean we cannot simulate the universe at both high resolution and large volume, says study lead author Yin Li, an astrophysicist at the Flatiron Institute in New York City. With our new technique, it's possible to have both efficiently. In the future, these AI-based methods will become the norm for certain applications.
The new method developed by Li and his colleagues feeds a machine learning algorithm with models of a small region of space at both low and high resolutions. The algorithm learns how to upscale the low-res models to match the detail found in the high-res versions. Once trained, the code can take full-scale low-res models and generate super-resolution simulations containing up to 512 times as many particles.
The process is akin to taking a blurry photograph and adding the missing details back in, making it sharp and clear.
This upscaling brings significant time savings. For a region in the universe roughly 500 million light-years across containing 134 million particles, existing methods would require 560 hours to churn out a high-res simulation using a single processing core. With the new approach, the researchers need only 36 minutes.
The results were even more dramatic when more particles were added to the simulation. For a universe 1,000 times as large with 134 billion particles, the researchers' new method took 16 hours on a single graphics processing unit. Existing methods would take so long that they wouldn't even be worth running without dedicated supercomputing resources, Li says.
Li is a joint research fellow at the Flatiron Institute's Center for Computational Astrophysics and the Center for Computational Mathematics. He co-authored the study with Yueying Ni, Rupert Croft and Tiziana Di Matteo of Carnegie Mellon University; Simeon Bird of the University of California, Riverside; and Yu Feng of the University of California, Berkeley.
Cosmological simulations are indispensable for astrophysics. Scientists use the simulations to predict how the universe would look in various scenarios, such as if the dark energy pulling the universe apart varied over time. Telescope observations may then confirm whether the simulations' predictions match reality. Creating testable predictions requires running simulations thousands of times, so faster modeling would be a big boon for the field.
Reducing the time it takes to run cosmological simulations holds the potential of providing major advances in numerical cosmology and astrophysics, says Di Matteo. Cosmological simulations follow the history and fate of the universe, all the way to the formation of all galaxies and their black holes.
So far, the new simulations only consider dark matter and the force of gravity. While this may seem like an oversimplification, gravity is by far the universe's dominant force at large scales, and dark matter makes up 85 percent of all the stuff in the cosmos. The particles in the simulation aren't literal dark matter particles but are instead used as trackers to show how bits of dark matter move through the universe.
The team's code used neural networks to predict how gravity would move dark matter around over time. Such networks ingest training data and run calculations using the information. The results are then compared to the expected outcome. With further training, the networks adapt and become more accurate.
The specific approach used by the researchers, called a generative adversarial network, pits two neural networks against each other. One network takes low-resolution simulations of the universe and uses them to generate high-resolution models. The other network tries to tell those simulations apart from ones made by conventional methods. Over time, both neural networks get better and better until, ultimately, the simulation generator wins out and creates fast simulations that look just like the slow conventional ones.
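For readers who want to see the shape of that adversarial setup in code, here is a stripped-down PyTorch sketch; the tiny networks and random stand-in tensors are placeholders, and this is not the researchers' code.

```python
# Stripped-down GAN-style super-resolution sketch: generator upscales, discriminator judges.
import torch
from torch import nn

G = nn.Sequential(nn.Upsample(scale_factor=2), nn.Conv2d(1, 1, 3, padding=1))        # low-res -> high-res
D = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                  nn.Flatten(), nn.Linear(8, 1))                                      # real-vs-fake score
opt_g = torch.optim.Adam(G.parameters(), 1e-4)
opt_d = torch.optim.Adam(D.parameters(), 1e-4)
bce = nn.BCEWithLogitsLoss()

for _ in range(100):
    low_res, high_res = torch.rand(4, 1, 16, 16), torch.rand(4, 1, 32, 32)            # stand-in data
    # Discriminator: label conventional high-res as real (1), generated output as fake (0)
    d_loss = bce(D(high_res), torch.ones(4, 1)) + bce(D(G(low_res).detach()), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: try to make the discriminator call its upscaled output real
    g_loss = bce(D(G(low_res)), torch.ones(4, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```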
We couldn't get it to work for two years, Li says, and suddenly it started working. We got beautiful results that matched what we expected. We even did some blind tests ourselves, and most of us couldn't tell which one was real and which one was fake.
Despite only being trained using small areas of space, the neural networks accurately replicated the large-scale structures that only appear in enormous simulations.
The simulations don't capture everything, though. Because they focus only on dark matter and gravity, smaller-scale phenomena such as star formation, supernovae and the effects of black holes are left out. The researchers plan to extend their methods to include the forces responsible for such phenomena, and to run their neural networks on the fly alongside conventional simulations to improve accuracy. We don't know exactly how to do that yet, but we're making progress, Li says.
Reference: AI-assisted superresolution cosmological simulations by Yin Li, Yueying Ni, Rupert A. C. Croft, Tiziana Di Matteo, Simeon Bird and Yu Feng, 4 May 2021, Proceedings of the National Academy of Sciences. DOI: 10.1073/pnas.2022038118
AI, RPA, and Machine Learning: How are they Similar & Different? – Analytics Insight
AI, RPA, and machine learning: you must have heard these words echoing in the tech industry. Be it blogs, websites, videos, or even product descriptions, disruptive technologies have made their presence bold. The fact that we all have AI-powered devices in our homes is a sign of how far the technology has come.
If you are under the impression that AI, robotic process automation, and machine learning have nothing in common, then here's what you need to know: they are all related concepts. Oftentimes, people use these names interchangeably and incorrectly, which causes confusion among businesses that are looking for the latest technological solutions.
Understanding the differences between AI, ML, and RPA tools will help you identify and understand where the best opportunities are for your business to make the right technological investment.
According to IBM, Robotic process automation (RPA), also known as software robotics, uses automation technologies to mimic back-office tasks of human workers, such as extracting data, filling in forms, moving files, etc. It combines APIs and user interface (UI) interactions to integrate and perform repetitive tasks between enterprise and productivity applications. By deploying scripts which emulate human processes, RPA tools complete autonomous execution of various activities and transactions across unrelated software systems.
In that sense, RPA tools enable highly logical tasks that don't require human understanding or human interference. For example, if your work revolves around inputting account numbers on a spreadsheet to run a report with a filter category, you can use RPA to fill the numbers on the sheet. Automation will mimic your actions of setting up the filter and generate the report on its own.
Given a clear set of instructions, RPA can perform almost any repetitive task. But there is one thing to remember: RPA systems have no capacity to learn as they go. If your task changes (for example, if the filter in the spreadsheet report changes), you will have to supply a new set of instructions manually, as the sketch below illustrates.
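To make the scripted, non-learning nature of RPA concrete, here is a minimal rule-based sketch of the spreadsheet example. The file names, column names, and filter rule are hypothetical, and a real RPA product would drive the spreadsheet application's UI or API rather than a CSV file, but the key property is the same: the instructions are fixed and must be edited by hand whenever the task changes.

```python
# Hypothetical RPA-style script: fixed instructions, no learning.
import csv

FILTER_CATEGORY = "ACTIVE"  # hard-coded rule; a new filter means editing the script

def generate_report(accounts_path: str, report_path: str) -> None:
    # Step 1: read the account rows, as a human operator would open the sheet.
    with open(accounts_path, newline="") as src:
        rows = list(csv.DictReader(src))

    # Step 2: apply the fixed filter, mimicking the operator's filter setup.
    matching = [row for row in rows if row["category"] == FILTER_CATEGORY]

    # Step 3: write the report the operator would otherwise assemble by hand.
    with open(report_path, "w", newline="") as dst:
        writer = csv.DictWriter(
            dst, fieldnames=["account_number", "category"], extrasaction="ignore"
        )
        writer.writeheader()
        writer.writerows(matching)

if __name__ == "__main__":
    generate_report("accounts.csv", "report.csv")
```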
The heaviest adopters of this technology are the banking, financial services, insurance, and telecom industries. Federal agencies such as NASA have also started using RPA to automate repetitive tasks.
According to Microsoft, "Artificial Intelligence is the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future."
In that sense, the major difference between RPA and AI is intelligence. While both technologies perform tasks efficiently, only AI does so with capabilities resembling human intelligence.
Chatbots and virtual assistants are two popular uses of AI in the business world. In the tax industry, AI is making tax forecasting increasingly accurate through its predictive analytics capabilities, and its thorough data analysis makes identifying tax deductions and credits easier than before.
According to Gartner, "Advanced machine learning algorithms are composed of many technologies (such as deep learning, neural networks, and natural language processing), used in unsupervised and supervised learning, that operate guided by lessons from existing information."
Machine learning is a subset of AI, so the two terms cannot be used interchangeably. And that is the difference between RPA and ML: machine learning's intelligence comes from AI, whereas RPA has no intelligence of its own.
To understand this better, let us apply these technologies to a property tax scenario. First, you could train an ML model on a hundred tax bills; the more bills you feed the model, the more accurately it will predict future bills. But if you want to use the same model to handle an assessment notice, it will be of no use, and you would have to build a new model that knows how to work with assessment notices. This is where machine learning's intelligence reaches its limit: where ML fails to recognize the similarity between documents, a broader AI application could, thanks to its more human-like interpretation capabilities. A minimal sketch of such a tax-bill model follows below.
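As a rough illustration of that first step, here is a minimal tax-bill model using an off-the-shelf regression from scikit-learn. The feature, numbers, and dollar amounts are invented; the point is that the fitted model only knows the kind of document it was trained on.

```python
# Hypothetical tax-bill predictor: trained on past bills, useless for
# assessment notices, which have different fields entirely.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# A hundred made-up historical bills: assessed value (in $1,000s) -> tax bill ($).
assessed_value = rng.uniform(100, 900, size=(100, 1))
tax_bill = 12.0 * assessed_value[:, 0] + rng.normal(0, 150, size=100)

model = LinearRegression().fit(assessed_value, tax_bill)

# Predicting the bill for a newly assessed property works...
print(model.predict(np.array([[450.0]])))

# ...but handing the same model an assessment notice would not: a separate
# model (or a more general AI system) is needed for that document type.
```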
The healthcare industry uses ML to accurately diagnose and treat patients, retailers use ML to make the right products available at the right stores at the right time, and pharmaceutical companies use machine learning to develop new medications. These are just a few use cases of this technology.
AI and RPA are not interchangeable, but they can work together. The combination of AI and RPA is called smart process automation, or SPA.
Also known as intelligent process automation (IPA), this combination uses machine learning to deliver automated workflows with more advanced capabilities than RPA alone. The RPA part of the system executes the tasks, while the machine learning part focuses on learning. In short, SPA solutions can learn to perform a specific task by recognizing patterns, as the sketch below shows.
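The sketch below pairs a small text classifier (the learning part) with fixed, scripted handlers (the automation part), continuing the property tax example. The sample documents, labels, and handler actions are hypothetical.

```python
# Hypothetical SPA/IPA pipeline: ML decides what the document is,
# RPA-style rules decide what to do with it.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny labelled sample standing in for historical documents.
texts = [
    "annual property tax bill amount due",
    "tax bill payment due by december",
    "notice of assessment for parcel value",
    "assessment notice appeal deadline",
]
labels = ["tax_bill", "tax_bill", "assessment_notice", "assessment_notice"]

classifier = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)

def handle(document: str) -> str:
    """The learned step routes the document; the scripted steps are fixed."""
    kind = classifier.predict([document])[0]
    if kind == "tax_bill":
        return "queued for payment workflow"       # fixed RPA-style action
    return "queued for appeal-review workflow"     # fixed RPA-style action

print(handle("second notice of assessment for parcel 42"))
```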
The three technologies (AI, RPA, and ML) and their combination, SPA, hold exciting possibilities for the future. But the rewards can be reaped only when companies make the right choices. Now that you understand what each of these technologies can do, you are better placed to adapt and innovate.
More here:
AI, RPA, and Machine Learning How are they Similar & Different? - Analytics Insight
ARPA and Alibaba-led Group Set to Introduce The IEEE Shared Machine Learning Standard – bitcoinist.com
Posted: at 1:51 am
ARPA, a blockchain-based privacy-preserving computation network, has announced that the Institute of Electrical and Electronics Engineers (IEEE) P2830 standard has reached the ballot stage of the IEEE Standards Association (SA) standards development process. Alibaba leads the working group in which ARPA participates, alongside representatives from Shanghai Fudata, Baidu, Lenovo Group, Zhejiang University, Megvii Technology, and the China Electronics Standardization Institute.
The recent Ledger hack has made blockchain privacy a hot topic, since transactions can be tied back to wealthy users, creating security and privacy risks for individuals. Moreover, the recent surge in DeFi, coupled with the fact that the space is largely unregulated, has raised serious concerns in the crypto community. As a result, investors are pouring money into blockchain-based security protocols. PayPal, the payments giant, recently acquired Curv, a cryptocurrency security startup that uses multiparty computation (MPC) technology to secure its network, while Zengo has raised $20 million in funding to further the development of its keyless cryptocurrency wallet.
The soaring crypto market has created a pressing need for multiparty computation platforms like ARPA and for zero-knowledge proof (ZKP) protocols that preserve the privacy and anonymity of users.
MPC technology is based on the principles of Shamir's Secret Sharing: private data is broken into small pieces (shares) and distributed among participants without revealing the underlying data to any one of them. ARPA leverages MPC to secret-share data across its network, thereby preserving the anonymity of its users. A toy illustration of the underlying scheme follows below.
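As a rough illustration of that principle, here is a toy implementation of Shamir's Secret Sharing over a prime field. It is not ARPA's protocol; the prime, secret, share count, and threshold are arbitrary demonstration values.

```python
# Toy Shamir's Secret Sharing: split a secret into n shares so that any
# `threshold` of them reconstruct it, while fewer reveal nothing.
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for a demonstration

def split_secret(secret: int, n_shares: int, threshold: int):
    """Evaluate a random degree-(threshold-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, n_shares=5, threshold=3)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 123456789
```

Any three of the five shares recover the secret, while two or fewer give away nothing about it; this is the property that lets MPC participants compute on data no single party ever sees in full.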
The IEEE is the world's largest technical professional organization, promoting high-quality engineering, technology, and computing information. The IEEE Standards Association (SA) is an operating unit within the IEEE that nurtures, develops, and advances global standards across multiple industries, including IoT, AI, ML, power and energy, and consumer technology.
The IEEE SA P2830 standard defines an architecture for shared machine learning, in which a model is trained on encrypted data aggregated from multiple sources and processed by a trusted third party. The standard is intended for engineers and developers worldwide.
Alibaba initiated the submission of IEEE SA P2830 and was later joined by ARPA and other representatives from academia and industry, who together formed a working group and submitted a draft of the standard to the association. The IEEE SA develops a new standard through a process consisting of six stages. The draft has now cleared three of them, demonstrating that the standard is sufficiently stable, and is currently at the "Balloting the Standard" step.
To pass, at least 75% of the ballots from the balloting group must be returned, and at least 75% of the returned votes must be affirmative. The working group comprising Alibaba, ARPA, and other contributors is now awaiting the result, as balloting periods usually last 30 to 60 days.
Since its inception in 2018, ARPA has been researching and developing privacy-focused solutions. The platform uses multiparty computation technology to separate data utility from ownership, enabling data renting. In 2019, ARPA partnered with MultiVAC to enable developers to furnish mathematical guarantees of the security and privacy of their dApps. A year later, its broader focus on privacy led ARPA to win the 2020 Privacy-preserving Computation Emerging Power award.
Over the last few months, ARPA has collaborated with industrial partners and standardization institutions to draft privacy-preserving computation standards for multiple industries. The submission of the IEEE P2830 standard is part of ARPA's mission to work with global companies and academic institutions to provide frameworks and practical advice to developers and architects. It also acknowledges the project's contribution to building privacy-based frameworks.
More here: