Archive for the ‘Quantum Computer’ Category
Q-CTRL to Host Live Demos of ‘Quantum Control’ Tools – Quantaneo, the Quantum Computing Source
Posted: April 2, 2020 at 7:49 am
Q-CTRL, a startup that applies the principles of control engineering to accelerate the development of the first useful quantum computers, will host a series of online demonstrations of new quantum control tools designed to enhance the efficiency and stability of quantum computing hardware.
Dr. Michael Hush, Head of Quantum Science and Engineering at Q-CTRL, will provide an overview of the company's cloud-based quantum control engineering software, called BOULDER OPAL. This software uses custom machine learning algorithms to create error-robust logical operations in quantum computers. The team will demonstrate - using real quantum computing hardware in real time - how they reduce susceptibility to error by 100X and improve hardware stability over time by 10X, while reducing time-to-solution by 10X against existing software.
Scheduled to accommodate the global quantum computing research base, the demonstrations will take place:
April 16 from 4-4:30 p.m. U.S. Eastern Time (ET)
April 21 from 10-10:30 a.m. Singapore Time (SGT)
April 23 from 10-10:30 a.m. Central European Summer Time (CEST)
To register, visit https://go.q-ctrl.com/l/791783/2020-03-19/dk83
Released in Beta by Q-CTRL in March, BOULDER OPAL is an advanced Python-based toolkit for developers and R&D teams using quantum control in their hardware or theoretical research. Technology agnostic and with major computational grunt delivered seamlessly via the cloud, BOULDER OPAL enables a range of essential tasks which improve the performance of quantum computing and quantum sensing hardware. This includes the efficient identification of sources of noise and error, calculating detailed error budgets in real lab environments, creating new error-robust logic operations for even the most complex quantum circuits, and integrating outputs directly into real hardware.
The result for users is greater performance from today's quantum computing hardware, without the need to become an expert in quantum control engineering.
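To make "error-robust logical operations" concrete: the snippet below is a minimal NumPy sketch of the textbook Wimperis BB1 composite pulse, a standard quantum-control trick that suppresses amplitude miscalibration in a single-qubit flip. It illustrates the general technique only; it is not Q-CTRL's actual algorithm or API.

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

    def pulse(phi, theta):
        # Rotation by angle theta about an axis at angle phi in the x-y plane.
        axis = np.cos(phi) * X + np.sin(phi) * Y
        return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

    def naive_pi(eps):
        # A plain pi pulse whose amplitude is off by a fraction eps.
        return pulse(0.0, np.pi * (1 + eps))

    def bb1_pi(eps):
        # BB1: the target pulse followed by three correction pulses,
        # all sharing the same fractional amplitude error eps.
        phi = np.arccos(-1 / 4)  # Wimperis angle for a pi rotation
        U = I2
        for p, t in [(0.0, np.pi), (phi, np.pi), (3 * phi, 2 * np.pi), (phi, np.pi)]:
            U = pulse(p, t * (1 + eps)) @ U
        return U

    target = pulse(0.0, np.pi)
    fid = lambda U: abs(np.trace(target.conj().T @ U) / 2) ** 2
    for eps in (0.01, 0.05, 0.10):
        print(f"eps={eps:.2f}  naive error={1 - fid(naive_pi(eps)):.1e}  "
              f"BB1 error={1 - fid(bb1_pi(eps)):.1e}")

Running it shows the naive pulse's error growing quadratically with the miscalibration while the composite sequence stays orders of magnitude lower, which is the flavour of robustness the demonstrations describe.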
Experimental validations and an overview of the software architecture, developed in collaboration with the University of Sydney, were recently released in an online technical manuscript titled "Software Tools for Quantum Control: Improving Quantum Computer Performance through Noise and Error Suppression."
Disrupt The Datacenter With Orchestration – The Next Platform
Posted: at 7:49 am
Since 1965, the computer industry has relied on Moore's Law to accelerate innovation, pushing more transistors into integrated circuits to improve computation performance. Making transistors smaller helped lift all boats for the entire industry and enabled new applications. At some point, we will reach a physical limit, that is, a limit stemming from physics itself. Even with this setback, improvements kept on pace thanks to increased parallelism of computation and consolidation of specialized functions into single chip packages, such as systems on chip (SoCs).
In recent years, we have been nearing another peak. This article proposes to improve computation performance not only by building better hardware, but by changing how we use existing hardware - more specifically, how we use existing processor types. I call this approach Compute Orchestration: automatic optimization of machine code to best use the modern datacenter hardware (again, with special emphasis on different processor types).
So what is compute orchestration? It is the embracing of hardware diversity to support software.
There are many types of processors: Microprocessors in small devices, general purpose CPUs in computers and servers, GPUs for graphics and compute, and programmable hardware like FPGAs. In recent years, specialized processors like TPUs and neuromorphic processors for machine learning are rapidly entering the datacenter.
There is potential in this variety: instead of statically utilizing each processor for pre-defined functions, we can use existing processors as a swarm, each processor working on the most suitable workloads. Doing that, we can potentially deliver more computation bandwidth with less power, lower latency and lower total cost of ownership.
Non-standard utilization of existing processors is already happening: GPUs, for example, were already adapted from processors dedicated to graphics into a core enterprise component. Today, GPUs are used for machine learning and cryptocurrency mining, for example.
I call the technology to utilize the processors as a swarm Compute Orchestration. Its tenets are simple to state: compute orchestration is, in short, the automatic adaptation of binary code and the automatic allocation of workloads to the most suitable processor types available. I split the evolution of compute orchestration into four generations:
Compute Orchestration Gen 1: Static Allocation To Specialized Co-Processors
This type of compute orchestration is everywhere. Most devices today include co-processors to offload some specialized work from the CPU. Usually, the toolchain or runtime environment takes care of assigning workloads to the co-processor. This is seamless to the developer, but also limited in functionality.
The best-known example is the use of cryptographic co-processors for relevant functions. Being liberal in our definition of co-processor, Memory Management Units (MMUs), which manage virtual memory address translation, can also be considered an example.
Compute Orchestration Gen 2: Static Allocation, Heterogeneous Hardware
This is where we are now. In the second generation, the software relies on libraries, dedicated runtime environments and VMs to best use the available hardware. Let's call the collection of components that help better use the hardware "frameworks." Current frameworks implement specific code to better use specific processors. Most prevalent are frameworks that know how to utilize GPUs in the cloud. Usually, better allocation to bare metal hosts remains the responsibility of the developer. For example, the developer/DevOps engineer needs to make sure a machine with a GPU is available for the relevant microservice. This phenomenon is what brought me to think of Compute Orchestration in the first place, as it proves there is more slack in our current hardware.
Common frameworks like OpenCL allow programming compute kernels to run on different processors. TensorFlow allows assigning nodes in a computation graph to different processors (devices).
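As a concrete illustration of this second-generation, developer-driven allocation, TensorFlow's device scopes pin individual operations to a named processor. This sketch assumes a machine with at least one GPU; without one, TensorFlow falls back to, or complains about, the CPU.

    import tensorflow as tf

    with tf.device("/GPU:0"):      # pin the heavy matrix math to the first GPU
        a = tf.random.uniform((1024, 1024))
        b = tf.random.uniform((1024, 1024))
        c = tf.matmul(a, b)

    with tf.device("/CPU:0"):      # keep the cheap reduction on the CPU
        total = tf.reduce_sum(c)

    print(total)

The point of the generations that follow is to make this placement decision automatic rather than hand-written.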
This better use of hardware by existing frameworks is great. However, I believe there is a bigger edge to be gained. Existing frameworks still require effort from the developer to be optimal - they rely on the developer. Also, no legacy code from 2016 (for example) is ever going to utilize a modern datacenter GPU cluster. My view is that by developing automated and dynamic frameworks that adapt to the hardware and workload, we can achieve another leap.
Compute Orchestration Gen 3: Dynamic Allocation To Heterogeneous Hardware
Computation can take a cue from the storage industry: products for better utilization and reliability of storage hardware have been innovating for years. Storage startups develop abstraction layers and special filesystems that improve the efficiency and reliability of existing storage hardware. Computation, on the other hand, remains a stupid allocation of hardware resources. Smart allocation of computation workloads to hardware could result in better performance and efficiency for big data centers (for example, hyperscalers like cloud providers). The infrastructure for such allocation is here, with current data center designs pushing toward more resource disaggregation, the introduction of diverse accelerators, and increased work on automatic acceleration (for example: Workload-aware Automatic Parallelization for Multi-GPU DNN Training).
For high-level resource management, we already have automatic allocation: for example, Project Mesos, which focuses on fine-grained resource sharing; Slurm for cluster management; and several extensions using Kubernetes operators.
To advance further from here would require two steps: automatic mapping of the available processors (which we call the compute environment) and workload adaptation. Imagine a situation where the developer doesn't have to optimize her code to the hardware. Rather, the runtime environment identifies the available processing hardware and automatically optimizes the code. Cloud environments are heterogeneous and changing, and the code should change accordingly (in fact it's not the code that changes, but the execution model of the machine code in the runtime environment).
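To make that runtime decision tangible, here is a deliberately toy Python sketch of a dispatcher that maps each workload to the most suitable free processor. The device names and suitability scores are invented for illustration; a real orchestrator would measure them rather than hard-code them.

    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        kind: str  # e.g. "matmul", "branchy", "bitwise"

    # Higher score = better fit; hypothetical numbers for illustration only.
    SUITABILITY = {
        "matmul":  {"gpu": 3, "fpga": 2, "cpu": 1},
        "branchy": {"cpu": 3, "fpga": 1, "gpu": 0},
        "bitwise": {"fpga": 3, "gpu": 1, "cpu": 1},
    }

    def allocate(workloads, free_devices):
        # Greedily assign each workload to its best-scoring free device.
        plan = {}
        for w in workloads:
            ranked = sorted(free_devices,
                            key=lambda d: SUITABILITY[w.kind].get(d, 0),
                            reverse=True)
            if ranked:
                plan[w.name] = ranked[0]
                free_devices.remove(ranked[0])
        return plan

    print(allocate([Workload("train", "matmul"), Workload("parse", "branchy")],
                   ["cpu", "gpu", "fpga"]))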
Compute Orchestration Gen 4: Automatic Allocation To Dynamic Hardware
"A thought, even a possibility, can shatter and transform us." – Friedrich Wilhelm Nietzsche
The quote above is to say that we are far from a practical implementation of the concept described here (as far as I know). We can, however, imagine a technology that dynamically re-designs a data center to serve the needs of running applications. This change in the way whole data centers meet computation needs has already started. FPGAs are used more often and appear in new places (FPGAs in hosts, FPGA machines in AWS, SmartNICs), providing the framework for constant reconfiguration of hardware.
To illustrate the idea, I will use an example: Microsoft initiated Project Catapult, augmenting CPUs with an interconnected and configurable compute layer composed of programmable silicon. The timeline on the project's website is fascinating. The project started off in 2010, aiming to improve search queries by using FPGAs. Quickly, it proposed the use of FPGAs as "bumps in the wire," adding computation in new areas of the data path. Project Catapult also designed an architecture for using FPGAs as a distributed resource pool serving the entire data center. Then, the project spun off Project BrainWave, utilizing FPGAs to accelerate AI/ML workloads.
This was just one example of innovation in how we compute. A quick online search will bring up several academic works on the topic. All we need to reach the fourth generation is some idea synthesis, combining a few concepts together:
Low-effort HDL generation (for example, the Merlin compiler, BORPH)
In essence, what I am proposing is to optimize computation by adding an abstraction layer that automatically adapts machine code and allocates it to the most suitable processors available.
Automatic allocation on agile hardware is the recipe for best utilizing existing resources: faster, greener, cheaper.
The trends and ideas mentioned in this article can lead to many places. Is it likely that we are already working with existing hardware in the optimal way? It is my belief that we are in the midst of the improvement curve. In recent years, we have had increased innovation in basic hardware building blocks, new processors for example, but we still have room to improve in overall allocation and utilization. The more we deploy new processors in the field, the more slack we have in our hardware stack. New concepts, like edge computing and resource disaggregation, bring new opportunities for optimizing legacy code by smarter execution. To achieve that, legacy code can't be expected to be refactored. Developers and DevOps engineers can't be expected to optimize for the cloud configuration. We just need to execute code in a smarter way - and that is the essence of compute orchestration.
The conceptual framework described in this article should be further explored. We first need to find the killer app (what type of software we optimize for which type of hardware). From there, we can generalize. I was recently asked in a round table: what is the next generation of computation? Quantum computing? Tensor Processing Units? I responded that all of the above, but what we really need is better usage of the existing generation.
Guy Harpak is the head of technology at Mercedes-Benz Research & Development in its Tel Aviv, Israel facility. Please feel free to contact him with any thoughts on the topics above at harpakguy@gmail.com. Harpak notes that this contributed article reflects his personal opinion and is in no way related to people or companies that he works with or for.
Related reading: if you find this article interesting, here are some articles on similar topics:
Return Of The Runtimes: Rethinking The Language Runtime System For The Cloud 3.0 Era
The Deep Learning Revolution And Its Implications For Computer Architecture And Chip Design (by Jeffrey Dean from Google Research)
Beyond SmartNICs: Towards A Fully Programmable Cloud
Hyperscale Cloud: Reimagining Datacenters From Hardware To Applications
Quantum Computing: Will It Actually Produce Jobs? – Dice Insights
Posted: March 19, 2020 at 1:52 pm
If you're interested in tech, you've likely heard about the race to develop quantum computers. These systems compute via qubits, which exist not only as ones and zeros (as you find in traditional processors) but also in a blend of both states known as superposition.
For tasks such as cryptography, qubits and superposition would allow a quantum computer to explore an enormous number of potential solutions simultaneously, making such systems much faster than conventional computers. Microsoft, Google, IBM, and other firms are all throwing tons of resources into quantum-computing research, hoping for a breakthrough that will make them a leader in this nascent industry.
Questions abound about quantum computing, including whether these systems will actually produce the answers that companies really need. For those in the tech industry, there's a related interest in whether quantum computing will actually produce jobs at scale.
"The large tech companies and research laboratories who are leading the charge on R&D in the pure quantum computing hardware space are looking for people with advanced degrees in key STEM fields like physics, math and engineering," said John Prisco, President & CEO of Quantum Xchange, which markets a quantum-safe key distribution system that supposedly will bridge the gap between traditional encryption solutions and quantum computing-driven security. "This is in large part because there are few programs today that actually offer degrees or specializations in quantum technology."
When Prisco was in graduate school, he added, "There were four of us in the electrical engineering program with the kind of physics training this field calls for. More recently, I've seen universities like MIT and Columbia investing in offering this training to current students, but it's going to take a while to produce experts."
There's every chance that increased demand for quantum-skilled technologists could drive even more universities to spin up the right kind of training and education programs. The National Institute of Standards and Technology (NIST) is evaluating post-quantum cryptography that would replace existing methods, including public-key RSA encryption. Time is of the essence when it comes to governments and companies coming up with these post-quantum algorithms; the next evolutions in cryptography will render the current generation pretty much obsolete.
Combine that quest with the current shortage of trained cybersecurity professionals, and you start to see where the talent and education crunch will hit over the next several years. "While hackers weaponizing quantum computers themselves is still a far-off prospect, the threat of harvesting attacks, where nefarious actors steal encrypted data now to decrypt later once quantum computers are available, is already here," Prisco said, pointing at China's 2015 hack of the U.S. Office of Personnel Management, which saw the theft of 21 million government employee records.
"Though that stolen data was encrypted and there is no evidence it has been misused to date, the Chinese government is likely sitting on that trove, waiting for the day they have a quantum computer powerful enough to crack public key encryption," he said. "Organizations that store sensitive data with a long shelf-life need to start preparing now. There is no time to waste."
But what will make a good quantum technologist?
Herman Collins, CEO of StrategicQC, a recruiting agency for the quantum-computing ecosystem, believes that sourcing quantum-related talent at this stage comes down to credentials. Because advanced quantum expertise is rare, the biggest sign that a candidate is qualified is whether they have a degree in one of the fields of study that relates to quantum computing, he said. "I would say that degrees, particularly advanced degrees, in quantum physics obviously, physics theory, math or computer science are a good start. A focus on machine learning or artificial intelligence would be excellent as part of an augmented, dynamic quantum skill set."
Although Google, IBM, and the U.S. government have infinite amounts of money to throw at talent, smaller companies are occasionally posting jobs for quantum-computing talent. Collins thinks that, despite the relative lack of resources, these small companies have at least a few advantages when it comes to attracting the right kind of very highly specialized talent.
"Smaller firms and startups can often speak about the ability to do interesting work that will impact generations to come, and perhaps some equity participation," he said. Likewise, some applicants may be interested in working with smaller firms to build quantum-related technology from the ground up. Others might prefer the more close-knit team environment that smaller firms may offer.
Some 20 percent of quantum-related positions, Collins continued, are in marketing, sales, management, tech support, and operations. Even if you haven't spent years studying quantum computing, in other words, you can still potentially land a job at a quantum-computing firm, doing all the things necessary to ensure that the overall tech stack keeps operating.
"It is equally important for companies in industries where quantum can have impactful results in the nearer term to begin to recruit and staff quantum expertise now," Collins said. "Companies competing in financial services, aerospace, defense, healthcare, telecommunications, energy, transportation, agriculture and other sectors should recognize the vital importance of looking very closely at quantum and adding some skilled in-house capability."
Given the amount of money and research-hours already invested in quantum computing, as well as some recent (and somewhat controversial) breakthroughs, there's every chance the tech industry could see an uptick in demand for jobs related to quantum computing. Even for those who don't plan on specializing in this esoteric field, there may be opportunities to contribute.
Quantum computing is right around the corner, but cooling is a problem. What are the options? – Diginomica
Posted: at 1:52 pm
Why would you be thinking about quantum computing? Yes, it may be two years or more before quantum computing is widely available, but there are already quite a few organizations pressing ahead. I'll get into those use cases, but first, let's start with the basics:
Classical computers require built-in fans and other ways to dissipate heat, and quantum computers are no different. Instead of working with bits of information that can be either 0 or 1, as in a classical machine, a quantum computer relies on "qubits," which can be in both states simultaneously, called a superposition, thanks to the quirks of quantum mechanics. Those qubits must be shielded from all external noise, since the slightest interference will destroy the superposition, resulting in calculation errors. Well-isolated qubits heat up quickly, so keeping them cool is a challenge.
The current operating temperature of quantum computers is 0.015 kelvin (about -273°C, or -460°F). That is the only way to slow down the movement of atoms enough for a "qubit" to hold a value.
There have been some creative solutions proposed for this problem, such as the "nanofridge," which builds a circuit with an energy gap dividing two channels: a superconducting fast lane, where electrons can zip along with zero resistance, and a slow resistive (non-superconducting) lane. Only electrons with sufficient energy to jump across that gap can get onto the superconductor highway; the rest are stuck in the slow lane. This has a cooling effect.
Just one problem though: the inventor, Mikko Möttönen, is confident enough in its eventual success that he has applied for a patent for the device. However, "Maybe in 10 to 15 years, this might be commercially useful," he said. "It's going to take some time, but I'm pretty sure we'll get there."
Ten to fifteen years? It may be two years or more before quantum computing is widely available, but there are already quite a few organizations pressing ahead across a range of sectors.
An excellent, detailed report on the quantum computing ecosystem is The Next Decade in Quantum Computing and How to Play.
But the cooling problem must get sorted. It may be diamonds that finally solve some of the commercial and operational/cost issues in quantum computing: synthetic, also known as lab-grown diamonds.
The first synthetic diamond was grown by GE in 1954. It was an ugly little brown thing. By the '70s, GE and others were growing up to 1-carat off-color diamonds for industrial use. By the '90s, a company called Gemesis (renamed Pure Grown Diamonds) successfully created one-carat flawless diamonds graded IIa, meaning perfect. Today designer diamonds come in all sizes and colors: adding boron makes them blue, and nitrogen makes them yellow.
Diamonds have unique properties. They have high thermal conductivity (meaning they don't melt like silicon); the thermal conductivity of a pure diamond is the highest of any known solid. They are also excellent electrical insulators. A diamond can host an impurity called an N-V (nitrogen-vacancy) center, where a carbon atom is replaced by a nitrogen atom next to a gap in the lattice, and an unpaired electron around that gap can be excited or polarized by a laser. When excited, the electron gives off a single photon, leaving it in a reduced energy state. Somehow, and I admit I don't completely understand this, the particle is placed into a quantum superposition. In quantum-speak, that means it can be two things, two values, two places at once, where it has both spin up and spin down. That is the essence of quantum computing: the creation of a "qubit," something that can be both 0 and 1 at the same time.
If that isn't weird enough, there is the issue of entanglement. A microwave pulse can be directed at a pair of qubits, placing them both in the same state. But you can "entangle" them so that they are always in the same state - in other words, if you change the state of one of them, the other also changes, even if great distances separate them, a phenomenon Einstein dubbed "spooky action at a distance." Entangled photons don't need bulky equipment to keep them in their quantum state, and they can transmit quantum information across long distances.
At least in theory, entanglement means that adding qubits explodes a quantum computer's computing power. In telecommunications, for example, entangled photons that span the traditional telecommunications spectrum have enormous potential for multi-channel quantum communication.
News flash: physicists have just demonstrated three-particle entanglement. This increases the capacity of quantum computing geometrically.
The cooling of qubits is the stumbling block. Diamonds seem to offer a solution, one that could bring quantum computing into the mainstream. The impurities in synthetic diamonds can be manipulated, and the state of a qubit can be held at room temperature, unlike in other potential quantum computing systems; NV-center qubits (described above) are also long-lived. There are still many issues to unravel to make quantum computers feasible, but today, unless you have a refrigerator at home that can operate near absolute zero, hang on to that laptop.
But don't diamonds in computers sound expensive, flagrant, excessive? It begs the question: what is anything worth? Synthetic diamonds for jewelry are not as expensive as mined gems, but the price one pays at retail is burdened by the effects of monopoly and of the many intermediaries: distributors, jewelry companies, and retailers.
A recent book explored the value of fine things and explains that their perceived value has only a psychological basis. In the 1930s, De Beers, which had a monopoly on the world diamond market and too many diamonds for the weak demand, engaged the N. W. Ayer advertising agency, realizing that diamonds were only being sold to the very rich, while everyone else was buying cars and appliances. They created a market for diamond engagement rings and introduced the idea that a man should spend at least three months' salary on a diamond for his betrothed.
And in a classic selling of an idea, not a brand, they used earworm taglines like "diamonds are forever." These four iconic words have appeared in every single De Beers advertisement since 1948, and AdAge named it the #1 slogan of the century in 1999. Incidentally, diamonds aren't forever. That diamond on your finger is slowly evaporating.
The worldwide outrage over the Blood Diamond scandal is increasing the supply of, and demand for, synthetic diamonds in fine jewelry applications. If quantum computers take off, and a diamond-based architecture becomes a standard, it will spawn a synthetic diamond production boom, increasing supply and drastically lowering cost, making it feasible.
Many thanks to my daughter, Aja Raden, an author, jeweler, and behavioral economist for her insights about the diamond trade.
Quantum Computing for Everyone – The Startup – Medium
Posted: at 1:52 pm
Qubits are exponentially faster than bits for several computing problems, such as database searches and factoring (which, as we will discuss soon, may break your Internet encryption).
An important thing to realize is that qubits can hold much more information than bits can. One bit holds the same amount of information as one qubit - they can both only hold one value. However, four bits must be used to store the same amount of information as two qubits. A two-qubit system in equal superposition holds values for four states, which on a classical computer would need at least four bits to hold. Eight bits are needed to store the same amount of information as three qubits, since a three-qubit system can store eight states: 000, 001, 010, 011, 100, 101, 110, and 111. This pattern continues.
The below graph provides a visual for the computing power of qubits. The x-axis represents the number of qubits used to hold a certain amount of information. The blue line's y-value represents the number of bits needed to hold the same amount of information as that number of qubits, i.e. 2 to the power of x. The red line's y-value represents the number of qubits needed to hold the same amount of information as the number of qubits on the x-axis (y = x).
Imagine the exponential speedup quantum computing can provide! A gigabyte (8E+09 bits) worth of information can be represented with log(8E+09)/log(2) = 33 qubits (rounded up from 32.9).
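That arithmetic is easy to sanity-check in Python:

    import math

    bits = 8e9                           # one gigabyte, expressed in bits
    qubits = math.ceil(math.log2(bits))  # qubits whose state space matches
    print(qubits)                        # -> 33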
Quantum computers are also great at factoring numbers, which leads us to RSA encryption. The security protocol that secures Medium and probably any other website you've been on is known as RSA encryption. It relies on the fact that, with current computing resources, it would take a very, very long time to factor a 30+-digit number m that has only one factorization - namely, p times q, where both p and q are large prime numbers. However, dividing m by p or q is computationally much easier, and since m divided by q returns p and vice versa, it provides a quick key verification system.
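The asymmetry is easy to demonstrate with toy numbers. Real RSA moduli run to hundreds of digits, where trial division becomes hopeless; the primes below are tiny and purely illustrative.

    import time

    p, q = 1_000_003, 1_000_033   # small primes, illustration only
    m = p * q                     # multiplying: effectively instant

    def factor(n):
        # Trial division: work grows with sqrt(n), which explodes
        # as the number of digits in n increases.
        f = 3
        while f * f <= n:
            if n % f == 0:
                return f, n // f
            f += 2
        return None

    start = time.time()
    print(factor(m), f"found in {time.time() - start:.3f}s")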
A quantum algorithm called Shor's algorithm has shown exponential speedup in factoring numbers, which could one day break RSA encryption. But don't buy into the hype yet - as of this writing, the largest number factored by quantum computers is 21 (into 3 and 7). The hardware has not yet been developed for quantum computers to factor 30-digit numbers, or even 10-digit numbers. Even if quantum computers do one day break RSA encryption, a security protocol called BB84, which relies on quantum properties, is verified safe from quantum computers.
So will quantum computers ever completely replace the classical PC? Not in the foreseeable future.
Quantum computing, while developing very rapidly, is still in an infant stage, with research being conducted semi-competitively by large corporations like Google, Microsoft, and IBM. Much of the hardware needed to accelerate quantum computing is not currently available. There are several obstacles to a quantum future, a major one being addressing gate errors and maintaining the integrity of a qubit's state.
However, given the amount of innovation that has happened in the past few years, it seems inevitable during our lifetimes that quantum computing will make huge strides. In addition, complexity theory has shown that there are several cases where classical computers perform better than quantum computers. IBM quantum computer developers state that quantum computing will probably never completely eliminate classical computers. Instead, in the future we may see a hybrid chip that relies on quantum transistors for certain tasks and classical transistors for others, depending on which one is more appropriate.
Work from home: Improve your security with MFA – We Live Security
Posted: at 1:52 pm
Remote work can be much safer with the right cyberhygiene practices in place - multifactor authentication is one of them
If you happen to be working from home due to the COVID-19 pandemic, you should beef up your logins with Multi-Factor Authentication (MFA), sometimes called Two-Factor Authentication (2FA). That way, you don't have to entrust your security to a password alone. Easy to hack, steal, leak, rinse and repeat, passwords have become passé in the security world; it's time to dial in your MFA.
That means you have something besides just a password. You may have seen MFA in action when you try to log into your bank and you receive an access code on your smartphone that you must also enter to verify it's really you who is logging in. While it's an extra step, it becomes exponentially more difficult for bad guys to get access to your account, even if they have a password that was compromised in a breach or otherwise.
The good news is that MFA is no longer super-tough to use. Here, we look at a few different popular ways to use it. If you need to work remotely now and log into a central office to collaborate with co-workers, this is a nice way to beef up the security of those connections.
This means you have something like a key fob, security USB key or the like, which can be used to generate a very secure passcode that's all but impossible to break (unless you have a quantum computer handy). Nowadays, things like YubiKey or Thetis are available for less than US$50 and are very widely supported if you're logging into your own corporate office technology, online office applications and a host of other cloud applications. It means your normal login will ask for a password, but also for the code generated by your device, which is often physically small enough to get lost in a pants pocket, so some folks hang them on their keychain for safekeeping.
Nowadays you probably carry a mobile device around most of the time, which is a good argument for using it to boost your MFA security stance. For example, you can download an authentication app such as Authy, Google Authenticator, or ESET Secure Authentication. Whatever you choose, make sure it has a solid history, security-wise, since it needs to reside on your smartphone, which we now know can become compromised as well, thereby undermining your other security efforts.
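Under the hood, most authenticator apps implement the TOTP standard (RFC 6238): a shared secret plus the current time yields the six-digit code. Here is a minimal sketch using only Python's standard library and a made-up demo secret, not any real account's key.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        # Decode the shared secret and count 30-second time steps (RFC 6238).
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval
        msg = struct.pack(">Q", counter)
        # HMAC-SHA1 plus dynamic truncation, per RFC 4226.
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        code = (struct.unpack(">I", digest[offset:offset + 4])[0]
                & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret, not a real account

Because both sides derive the code from the secret and the clock, nothing secret travels over the network at login time, which is part of what makes this factor hard to phish at scale.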
RELATED READING: Work from home: How to set up a VPN
It's worth noting that spam SMS messages on your smartphone can trick some users into voluntarily compromising their own accounts, so stay on the lookout if you use this. Of course, reputable mobile security software can help if you're concerned with security problems on the platform itself.
It's very hard to fake a fingerprint or retinal scan, which makes biometrics a solid factor in MFA. Nowadays, lots of devices have built-in biometric readers that can get an image of your face from your smartphone taking your picture, or scan your fingerprint, so it's not hard to implement this on a device you probably already have. Some folks steer away due to privacy concerns, which promises to be an ongoing conversation. Also, while you can reset a password, it is notoriously difficult to reset your face if a provider gets hacked (old spy movie plots, anyone?).
The important thing with MFA is that you pick a method that suits your goals and that is easy for you to include in your routine. I have a very good lock on my front door, but it's very hard to use, so often my wife catches me leaving it open - which isn't very secure, is it? Good security you don't use can't protect you.
In the event of a breach, MFA can offer side benefits as well. If you are notified that your password is compromised, there's a very good chance the attackers don't also have one of your other factors, so successful hack attacks should drop precipitously if MFA is correctly implemented. Use an MFA solution and enjoy technology more safely.
Career navigation – Be at the core or be at the edge – The Financial Express BD
Posted: at 1:52 pm
Radi Shafiq | Published: March 19, 2020 11:02:35
In 2009, for aspiring engineering students, electrical engineering was the best subject to study. By the end of 2014, it seemed to be computer science, now it seems to be data science / statistics. There is no way of telling someone about what is to come in five years. Maybe it is quantum computing, or maybe a new era emphasising mental well-being, maybe biochemistry, or philosophy suddenly takes the centre stage at every endeavour.
Today, the market is shifting at an ever-increasing pace. It is easy to feel lost while navigating a career, looking for the best path to climb the ladder. Young professionals are essentially trying to be good enough to remain relevant, even vital, in 20-30 years. However, most of today's buzz-worthy careers were not even around 10 years ago, so how can one prepare for something 20 years down the line?
Here the author found a framework of thinking very helpful. It can be called the "be at the core or be at the edge" framework of thinking about jobs. Every company has some core functions that are time-tested and relatively stable - for some it is manufacturing, for some it is sales, for others it is field management. These functions have well-defined roles, a hierarchy, and history to go alongside them. If someone is good at this core work, the job is more secure for him or her, with little probability of unpredictable troubles. A clear hierarchy means the career will also have a defined progression, although at a predictable pace, with openings created only by seniors moving out or up and by company growth.
On the other hand, there are the functions at the edge of the company. These are new things - maybe a new data section, maybe a digital marketing wing, or a small research team that is yet to make an impact on the work. At the edge there are people who often keep a low profile while staying flexible enough to take initiatives in creative new directions. They are introducing new programmes and exploring sudden new flows of value or revenue. They can often be deemed unnecessary by the core people in the organisation.
However, since this is a time of maximum pace of change in the market landscape, the people at the edge have the best chance of adapting to a new reality and introducing the necessary functions that take the company to the next level. This can suddenly make the edge people become the core people - or at least a vital support function for the core to survive and thrive. Think of the way Adobe stopped regular software sales in favour of subscription services, how newspapers more and more emphasise the web version over print, or how all the TV shows now work overtime on YouTube clips.
The people who are overstretched into their core function and their way of doing things can become stiff and slow to look into new avenues, as looking into anything outside can understandably feel like a waste of time. Why would anyone stop doing what makes the most money and instead dabble in stuff that has no proven market? This thinking binds them away from dynamic learning possibilities. And then sudden changes are brought about by one company, and in the aftermath the whole market begins to adapt, quickly changing the old core people's position in the market hierarchy. Suddenly the market demands that one learn new tricks to stay relevant in a place that was secure for years.
Very often though, there is no harm in digging deep into the core of the company. It can be a very safe bet, as most businesses may not change so dramatically.
But, to reduce the risk of suddenly being left irrelevant in the market, it is best that everyone invest a portion of their time working on projects at the edge of their organisation, or at the edge of their skill set - all throughout their career. This flexibility will keep them in touch with the changing tides, and make sure that they can ride the wave, or at least not be taken by surprise when the change finally comes.
This thinking works at any stage of life. When the author was a student, he did digital art just for fun, but ultimately it helped him land his first three part-time jobs; having those skills was a bonus on top of his studies. He had friends whose outside interest in videography while studying computer science ended up shaping their whole careers. In the author's office, he has seen a colleague's occasional contribution to a new initiative become 50 per cent of her duties within a year's time, leading to a promotion and recognition.
So, think again, at the office, are you at the core or at the edge? Why not both? Keep learning. Keep creating.
Radi Shafiq is a development professional and artist. He can be reached at radi.iba@gmail.com
TensorFlow gets its quantum of solace, lid lifted on ‘all-seeing crime-detecting’ AI upstart, and more – The Register
Posted: March 17, 2020 at 5:44 am
Roundup Here's a handy little roundup of all the bits of AI news that you may have missed.
Uh oh, another surveillance company has secretly been purloining data from social media: Banjo, the AI startup that believes its software can detect and surface crimes and other activities in real time from all kinds of data feeds, also scraped information from people's public social media profiles.
However, it wasn't as brazen as Clearview, the controversial upstart known for downloading over three billion photos from Facebook, Instagram, YouTube, Twitter, and more to put together a massive dataset for facial recognition. Banjo apparently created a shadow company called Pink Unicorn Labs, according to Vice.
Pink Unicorn Labs went on to develop three apps directed at fans of things like the British boyband One Direction, EDM music, and Formula One racing. These apps asked users to connect and sign in using their accounts on social media platforms like Facebook, Twitter, Instagram, Google Plus, and FourSquare, as well as VK and Sina Weibo, commonly used in Russia and China. Linking the Pink Unicorn Labs apps to people's accounts made it possible to scrape those netizens' data, such as images or location history.
Code across all three apps contained links to Banjo's website. Both companies were registered at the same address in Redwood City, California, and headed by Banjo's CEO Damien Patton.
Pink Unicorn Labs' apps were removed from the Google Play Store in 2016. Even though data might be publicly posted on people's accounts, scraping it for commercial purposes is against the terms of service of these platforms.
AI helps historians read messages carved on ancient bones: Researchers from Southwest University in China used a convolutional neural network to classify and read ancient scripts carved on bones dating back more than 3,000 years, to between 1600 and 1046 BC.
The Chinese characters were written in Yi script, whose oldest examples show it was used in the Middle Kingdom from the 15th century. Studying these ancient texts is difficult; not only does it require extensive knowledge of the language and its history, but the messages imprinted on these bones have cracked and worn out over time.
Here's where the machine learning bit comes in. A convolutional neural network was trained on images of these texts in which each character was labelled, so it could recognize scripts carved on other types of bones, according to a paper published in IEEE Computer Graphics and Applications.
The researchers used a dataset consisting of 1,476 tortoise shell rubbings and 300 ox bone rubbings, from which they chose one-third as the test set and two-thirds as the training set. "Experiment results show the proposed method reaches a level close to that of oracle experts," Synced explained this week.
"As I said, classification is the first step," Shanxiong Chen, first author of the paper and an associate professor of computer and information science, told Synced.
"This study specifically focused on telling between animal bones and tortoise shells, and we're continuously working with Capital Normal University's Center for Oracle Bone Studies on further classifying different types of animal bones."
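For readers curious what such a classifier looks like in code, below is a minimal Keras sketch of a small CNN for the two-way shell-versus-bone task. The 64x64 grayscale input size and the layer sizes are assumptions for illustration, not the paper's actual network.

    import tensorflow as tf

    # A small CNN: two conv/pool stages, then a two-way softmax head.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax"),  # shell vs. ox bone
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()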
ICLR 2020 goes virtual: Tech conferences are dropping like flies amidst the current outbreak of the coronavirus. Now, the International Conference on Learning Representations (ICLR), a top academic machine learning conference, has decided to cancel its physical event due to take place in Addis Ababa, Ethiopia, next month.
"Due to growing concerns about COVID-19, ICLR2020 will cancel its physical conference this year, instead shifting to a fully virtual conference," it announced this week. "We were very excited to hold ICLR in Addis Ababa, and it is disappointing that we will not all be able to come together in person in April."
Organisers have asked all academics with accepted papers to create a five-minute video presenting their work as part of a virtual poster session. For those who were invited to give a talk, that video can be extended to 15 minutes, with the information conveyed in a series of slides. Workshops are a little trickier to put together; ICLR is currently contacting speakers to coordinate.
All registration fees and travel purchased for the conference will be reimbursed. The price to attend the digital conference has dropped to $50 for students and $100 for non-students.
New TensorFlow library! If you're bored at home, social distancing from all your friends, family, and colleagues, then try this: TensorFlow's latest library lets you build quantum AI models.
Your brain will probably turn to mush trying to understand and combine both quantum computing and machine learning. The library known as TensorFlow Quantum (TFQ) was built by folks over at Google, the University of Waterloo, X, and Volkswagen, to give developers tools to process data that could, theoretically, run on quantum computers.
"We announce the release of TensorFlow Quantum (TFQ), an open-source library for the rapid prototyping of quantum ML models," the Chocolate Factory said this week. "TFQ provides the tools necessary for bringing the quantum computing and machine learning research communities together to control and model natural or artificial quantum systems."
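For a flavour of what TFQ code looks like, the sketch below builds a one-qubit parameterized circuit in Cirq and wraps it in TFQ's PQC layer, which exposes the circuit's expectation value as a trainable Keras-style layer. The calls follow the library's launch-era documentation, but treat the exact API as something to verify against the current docs.

    import cirq
    import sympy
    import tensorflow_quantum as tfq

    q = cirq.GridQubit(0, 0)
    theta = sympy.Symbol("theta")

    # A trainable circuit: one rotation whose angle is a learnable parameter.
    model_circuit = cirq.Circuit(cirq.rx(theta)(q))

    # The PQC layer trains theta and outputs the Z expectation value.
    pqc = tfq.layers.PQC(model_circuit, cirq.Z(q))

    # Inputs are themselves circuits, serialized to tensors; here, an empty one.
    data = tfq.convert_to_tensor([cirq.Circuit()])
    print(pqc(data))  # a value in [-1, 1]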
What Is Quantum Computing, And How Can It Unlock Value For Businesses? – Computer Business Review
Posted: January 27, 2020 at 8:48 pm
We are at an inflection point
Ever since Professor Alan Turing proposed the principle of the modern computer in 1936, computing has come a long way. While advancements to date have been promising, the future is even brighter, all thanks to quantum computing, which performs calculations based on the behaviour of particles at the sub-atomic level, writes Kalyan Kumar, CVP and CTO IT Services, HCL Technologies.
Quantum computing promises to unleash unimaginable computing power that's not only capable of addressing current computational limits, but of unearthing new solutions to unsolved scientific and social mysteries. What's more, thanks to increasing advancement since the 1980s, quantum computing can now drive some incredible social and business transformations.
Quantum computing holds immense promise in defining a positive, inclusive and human-centric future, which is what the WEF Future Council on Quantum Computing envisages. The most anticipated uses of quantum computing are driven by its potential to simulate quantum structures and behaviours across chemicals and materials. This promise is viewed guardedly by current scientists, who caution that quantum computing is still far from making a meaningful impact.
This said, quantum computing is expected to open amazing and much-needed possibilities in medical research. Drug development, which usually takes more than 10 to 12 years and billions of dollars of investment, is expected to shorten considerably, alongside the potential to explore unique chemical compositions that may just be beyond the limits of current classical computing. Quantum computing can also help with more accurate weather forecasting, providing accurate information that can help save tremendous amounts of agricultural production from damage.
Quantum computing promises a better and improved future, and while humans are poised to benefit greatly from this revolution, businesses too can expect unparalleled value.
When it comes to quantum computing, it can be said that much of the world is at the "they don't know what they don't know" stage. Proof points are appearing, and it is becoming clear that quantum computing solves problems that cannot be addressed by today's computers. Within transportation, for example, quantum computing is being used to develop battery and self-driving technologies, while Volkswagen has also been using quantum computing to match patterns and predict traffic conditions in advance, ensuring a smoother movement of traffic. In supply chains, logistics and trading are receiving a significant boost from the greater computing power and high-resolution modelling quantum computing provides, adding a huge amount of intelligence using new approaches to machine learning.
The possibilities for businesses are immense and go way beyond these examples mentioned above, in domains such as healthcare, financial services and IT. Yet a new approach is required. The companies that succeed in quantum computing will be those that create value chains to exploit the new insights, and form a management system to match the high-resolution view of the business that will emerge.
While there are some initial stage quantum devices already available, these are still far from what the world has been envisaging. Top multinational technology companies have been investing considerably in this field, but they still have some way to go. There has recently been talk of prototype quantum computers performing computations that would have previously taken 10,000 years in just 200 seconds. Though of course impressive, this is just one of the many steps needed to achieve the highest success in quantum computing.
It is vital to understand how and when we are going to adopt quantum computing, so we know the right time to act. The aforementioned prototype should be a wakeup call to early adopters who are seeking to find ways to create a durable competitive advantage. We even recently saw a business announcing its plans to make a prototype quantum computer available on its cloud, something we will all be able to buy or access some time from now. If organisations truly understand the value and applications of quantum computing, they will be able to create new products and services that nobody else has. However, productising and embedding quantum computing into products may take a little more time.
One important question arises from all this: are we witnessing the beginning of the end for classical computing? When looking at the facts, it seems not. With the advent of complete and practical quantum computers, we're seeing a hybrid computing model emerging where digital binary computers will co-process and co-exist with quantum qubit computers. The processing and resource-sharing needs are expected to be optimised using real-time analysis, with quantum taking over exponential computational tasks. To say the least, quantum computing is not about replacing digital computing, but about coexistence enabling composed computing that handles different tasks at the same time, similar to humans having left and right brains for analytical and artistic dominance.
If one thing's for sure, it's that we are at an inflection point, witnessing what could arguably be one of the most disruptive changes in human existence. Having a systematic and planned approach to the adoption of quantum computing will not only take some of its mystery away, but reveal its true strategic value, helping us to know when and how to become part of this once-in-a-lifetime revolution.
The End Of The Digital Revolution Is Coming: Here’s What’s Next – Innovation Excellence
Posted: at 8:48 pm
by Tom Koulopoulos
The next era of computing will stretch our minds into a spooky new world that we're just starting to understand.
In 1946 the Electronic Numerical Integrator and Computer, or ENIAC, was introduced. The world's first commercial computer was intended to be used by the military to project the trajectory of missiles, doing in a few seconds what it would otherwise take a human mathematician about three days. Its 20,000 vacuum tubes (the glowing glass light-bulb-like predecessors of the transistor) connected by 500,000 hand-soldered wires were a marvel of human ingenuity and technology.
Imagine if it were possible to go back to the developers and users of that early marvel and make the case that in 70 years there would be ten billion computers worldwide and half of the world's population would be walking around with computers 100,000,000 times as powerful as the ENIAC in their pants pockets.
You'd have been considered a lunatic!
I want you to keep that in mind as you resist the temptation to do the same to me because of what I'm about to share.
Quantum Supremacy
Digital computers will soon reach the limits of demanding technologies such as AI. Consider just the impact of these two projections: by 2025 driverless cars alone may produce as much data as exists in the entire world today; fully digitizing every cell in the human body would exceed ten times all of the data stored globally today. In these and many more cases we need to find ways to deal with unprecedented amounts of data and complexity. Enter quantum computing.
You've likely heard of quantum computing. Amazingly, it's a concept as old as digital computers. However, you may have discounted it as a far-off future that's about as relevant to your life as flying cars. Well, it may be time to reconsider. Quantum computing is progressing at a rate that is surprising even those who are building it.
Understanding what quantum computers are and how they work challenges much of what we know of not just computing, but the basics of how the physical world appears to operate. Quantum mechanics, the basis for quantum computing, describes the odd and non-intuitive way the universe operates at a sub-atomic level. It's part science, part theory, and part philosophy.
Classical digital computers use what are called bits, something almost all of us are familiar with. A bit can be a one or a zero. Quantum computers use what are called qubits (quantum bits). A qubit can also be a one or a zero, but it can also be an infinite number of possibilities in between the two. The thing about qubits is that while a digital bit is always either on (1) or off (0), a qubit is always in what's called a superposition state, neither on nor off.
Although it's a rough analogy, think of a qubit as a spinning coin that's just been flipped in the dark. While it's spinning, is it heads or tails? It's at the same time both and neither, until it stops spinning and we then shine a light on it. However, a binary bit is like a coin that has a switch to make it glow in the dark. If I asked you "Is it glowing?" there would only be two answers, yes or no, and those would not change as it spins.
That's what a qubit is like when compared to a classical digital bit. A qubit does not have a state until you effectively shine a light on it, while a binary bit maintains its state until that state is manually or mechanically changed.
Don't get too hung up on that analogy, because as you get deeper into the quantum world, using what we know of the physical world is always a very rough and ultimately flawed way to describe how things operate at the quantum level of matter.
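For the programmers in the room, the coin analogy translates into a few lines of code. This NumPy sketch simulates repeatedly "shining the light" on an equal superposition using the Born rule (probability equals amplitude squared); it simulates the statistics on a classical machine, of course, and is not a quantum computer.

    import numpy as np

    # Qubit state a|0> + b|1>; here, an equal superposition.
    state = np.array([1, 1], dtype=complex) / np.sqrt(2)
    probs = np.abs(state) ** 2          # Born rule: |amplitude|^2

    rng = np.random.default_rng()
    outcomes = rng.choice([0, 1], size=10, p=probs)
    print(outcomes)  # each "look" forces a 0 or 1, like stopping the coin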
However, the difficulty in understanding how quantum computers work hasn't stopped their progress. Google engineers recently talked about how the quantum computers they are building are progressing so fast that they may achieve the elusive goal of what's called quantum supremacy (the point at which quantum computers can exceed the ability of classical binary computers) within months. While that may be a bit of a stretch, even conservative projections put us on a 5-year timeline for quantum supremacy.
Quantum vs Classical Computing
Quantum computers, which are built using these qubits, will not replace all classical digital computers, but they will become an indispensable part of how we use computers to model the world and to integrate artificial intelligence into our lives.
Quantum computing will be one of the most radical shifts in the history of science, likely outpacing any advances we've seen to date with prior technological revolutions, such as the advent of semiconductors. It will enable us to take on problems that would take even the most powerful classical supercomputers millions or even billions of years to solve. That's not just because quantum computers are faster, but because they can approach problem solving with massive parallelism, using the qualities of how quantum particles behave.
The irony is that the same thing that makes quantum computers so difficult to understand, their harnessing of nature's smallest particles, also gives them the ability to precisely simulate the biological world at its most detailed. This means that we can model everything from chemical reactions, to biology, to pharmaceuticals, to the inner workings of the universe, to the spread of pandemics, in ways that were simply impossible with classical computers.
A Higher Power
The reason for all of the hype behind the rate at which quantum computers are evolving has to do with what's called doubly exponential growth.
The exponential growth that most of us are familiar with, and which is being talked about lately, refers to the classical doubling phenomenon. For example, Moore's law projects the doubling of the density of transistors on a silicon chip every 18 months. It's hard to wrap our linear brains around exponential growth, but it's nearly impossible to wrap them around doubly exponential growth.
Doubly exponential growth simply has no analog in the physical world. Doubly exponential growth means that you are raising a number to a power and then raising that to another power. It looks like this: 5^(10^10).
What this means is that while a binary computer can store 256 states with 8 bits (2^8), a quantum computer with eight qubits (recall that a qubit is the conceptual equivalent of a digital bit in a classical computer) can store 10^77 bits of data! That's a number with 77 zeros, or, to put it into perspective, scientists estimate that there are 10^78 atoms in the entire visible universe.
By the way, just to further illustrate the point, if you add one more qubit, the number of bits (or more precisely, states) that can be stored jumps to 10^154 (one more bit in a classical computer would only raise the capacity to 10^78).
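Python's arbitrary-precision integers make it easy to reproduce the article's arithmetic (whether quantum capacity really grows doubly exponentially is the article's claim, not settled physics):

    import math

    print(2 ** 8)                      # classical: 8 bits -> 256 states
    print(math.log10(2 ** (2 ** 8)))   # ~77.1: 2^(2^8) is about 10^77
    print(math.log10(2 ** (2 ** 9)))   # ~154.1: one more qubit, about 10^154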
Here's what's really mind-blowing about quantum computing (as if what we just described isn't already mind-blowing enough). A single caffeine molecule is made up of 24 atoms, and it can have 10^48 quantum states (there are only about 10^50 atoms that make up the Earth). Modeling caffeine precisely is simply not possible with classical computers. Using the world's fastest supercomputer, it would take 100,000,000,000,000 times the age of the universe to process the 10^48 calculations that represent all of the possible states of a caffeine molecule!
So, the obvious question is: how could any computer, quantum or otherwise, take on something of that magnitude? Well, how does nature do it? That cup of coffee you're drinking has trillions of caffeine molecules, and nature is doing just fine handling all of the quantum states they are in. Since nature is a quantum machine, what better way to model it than a quantum computer?
Spooky Action
The other aspect of quantum computing that challenges our understanding of how the quantum world works is what's called entanglement. Entanglement describes a phenomenon in which two quantum particles are connected in such a way that no matter how great the distance between them, they will both have the same state when they are measured.
At first blush that doesn't seem to be all that novel. After all, if I were to paint two balls red and then separate them by the distance of the universe, both would still be red. However, the state of a quantum object is always in what's called a superposition, meaning that it has no inherent state. Think of our coin flip example from earlier, where the coin is in a superposition state until it stops spinning.
If, instead of a color, its two states were up or down, it would always be in both states while also in neither state - that is, until an observation or measurement forces it to pick a state. Again, think back to the spinning coin.
Now imagine two coins entangled and flipped simultaneously at different ends of the universe. Once you stop the spin of one coin and reveal that it's heads, the other coin would instantly stop spinning and also be heads.
If this makes your head hurt, you're in good company. Even Einstein had difficulty with entanglement, calling it "spooky action at a distance." His concern was that the two objects couldn't communicate at a speed faster than the speed of light. What's especially spooky about this phenomenon is that the two objects aren't communicating at all in any classical sense of the term communication.
Entanglement creates the potential for all sorts of advances in computing, from how we create 100 percent secure communications against cyberthreats, to the ultimate possibility of teleportation.
Room For Possibility
So, should you run out and buy a quantum computer? Well, it's not that easy. Qubits need to be super-cooled and are exceptionally finicky particles that require an enormous room-sized apparatus and overhead - not unlike the ENIAC once did.
You can, however, use a quantum computer for free, or lease its use for more sophisticated applications. For example, IBM's Q is available both as an open-source learning environment for anyone and as a powerful tool for fintech users. However, I'll warn you that even if you're accustomed to programming computers, it will still feel as though you're teaching yourself to think in an entirely foreign language.
The truth is that we might as well be surrounded by 20,000 glowing vacuum tubes and 500,000 hand-soldered wires. We can barely imagine what the impact of quantum computing will be in ten to twenty years, no more so than the early users of the ENIAC could have predicted the mind-boggling ways in which we use digital computers today.
Listen in to my two podcasts with scientists from IBM, MIT, and Harvard to find out more about quantum computing. Quantum Computing Part I, Quantum Computing Part II
This article was originally published on Inc.
Tom Koulopoulos is the author of 10 books and founder of the Delphi Group, a 25-year-old Boston-based think tank and a past Inc. 500 company that focuses on innovation and the future of business. He tweets from @tkspeaks.