A lot of excitement, and a fair amount of hype, surrounds what artificial intelligence (AI) can do for the EDA industry. But many challenges must be overcome before AI can start designing, verifying, and implementing chips for us. Should AI replace the algorithms in use today, or does it have a different role to play?
At the end of the day, AI is a technique that has strengths and weaknesses, and it has to be compared against those already in use. Existing algorithms and tools have decades of data and experience that have been encapsulated in them, and little of that knowledge is available in a form that could train new AI models. That puts these data-hungry techniques at a serious disadvantage — one that will take time to overcome. But AI does not need that type of data for all tasks. Some tasks produce copious amounts of relevant data, and those are the ones in which early results are showing a lot of promise.
The question always has been, how much of the available data is actually useful? “There are millions of ways to apply machine learning,” says Dan Yu, product manager for ML Solutions in IC Design Verification at Siemens EDA. “Machine learning is a generic technology, and it really depends on how people perceive the problem, how people want to use data to solve a problem. Once we have more data, and once we pay attention to the quality of data, then later we will get smarter and smarter AI models. It takes time.”
The role of EDA within the chip industry is fairly clear, and EDA has embraced machine learning, which is a subset of AI. But how the broader AI fits into EDA is far less obvious. “We turn to EDA when it can help you get your job done, or get the task done faster or better,” says Dean Drako, president and CEO of IC Manage. “The two things that we want from the EDA industry are to design faster and make it better. We’re all human, and we make mistakes. People have issues, they get sick, and they only want to work 8 or 10 hours a day. If you can have AI do something either better, or make the person more productive, then that’s a big win. It’s about productivity. If AI makes me productive, that’s where I win.”
The EDA industry has to find the right tasks that provide those benefits, and which are possible with the data available today. “AI has a good chance of success when trying to replicate a task that a human is good at,” says Thomas Andersen, vice president for AI and machine learning at Synopsys. “Place-and-route is not a task a human would be good at. Any simple algorithm would beat a human because the human is not very good at placing things at this quantity, in the same way a simple calculator would beat most humans. A human would beat it in much more creative tasks and much more complex tasks — cognitive types of things.”
AI is just another algorithm in the arsenal, and machine learning is a subset of that. “Linear regression is now being considered as being machine learning, but people have been using regression for a long time,” says Michal Siwinski, CMO for Arteris IP. “It’s a statistical concept that goes back a long time. Fundamentally, these are different types of algorithms, and it is a matter of finding which algorithm becomes more efficient for a given task. Before we had convolution neural networks, it was really hard to effectively address aspects of vision and aspects of large language models.”
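To make that concrete, below is a minimal sketch of linear regression via ordinary least squares in Python, the decades-old statistical workhorse that now gets filed under machine learning. The gate-count and area numbers are invented purely for illustration.

```python
import numpy as np

# Invented data: gate counts and resulting die areas for past blocks.
gate_counts = np.array([10e3, 25e3, 40e3, 80e3, 120e3])
areas_um2   = np.array([0.9e6, 2.1e6, 3.4e6, 6.7e6, 10.2e6])

# Ordinary least squares: solve for slope and intercept of area ~ gates.
A = np.vstack([gate_counts, np.ones_like(gate_counts)]).T
slope, intercept = np.linalg.lstsq(A, areas_um2, rcond=None)[0]

# "Inference" is just evaluating the fitted line on a new design.
new_design = 60e3
print(f"predicted area: {slope * new_design + intercept:.3e} um^2")
```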
To make any of this work requires the right data. “AI and machine learning are basically learning from historical data, from the data we have accumulated,” says Siemens’ Yu. “It condenses knowledge from the data. What is important is the data you feed into your AI model to train it, to extract the knowledge effectively. You use that knowledge to predict what would happen if I’m given a new case. AI is basically saving us the effort of unnecessarily replicating work.”
But a lack of data can create issues. “Care has to be taken with AI because it has a problem with outliers and glitches,” says Marc Swinnen, director of product marketing at Ansys. “You cannot rely on it to always give you a good answer. That may be more acceptable in design tasks, because you need fast turnaround and iterative calculations during placement or routing. You may leave outliers until later verification stages, where it is easier to fix them using an ECO than to try and consider all corner cases every time a decision is made, especially when they are rarely applicable. At sign-off, the whole point is to catch the outliers.”
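As a toy illustration of that division of labor, the sketch below drives an optimization loop with a fast but noisy stand-in for a learned estimator, then runs the exact calculation only once at the end. Every function here is invented for the example; it shows the shape of the workflow, not a real placer.

```python
import random

random.seed(0)

# Toy stand-ins: a "placement" is just a number, "timing" a function of it.
def exact_timing(x):
    return (x - 7.0) ** 2                          # pretend: slow but exact

def ml_timing_estimate(x):
    return (x - 7.0) ** 2 + random.gauss(0, 0.5)   # fast but noisy

placement = 0.0
for _ in range(2000):
    candidate = placement + random.gauss(0, 0.2)   # propose a small move
    # The inner loop trusts the cheap, noisy estimate for speed.
    if ml_timing_estimate(candidate) < ml_timing_estimate(placement):
        placement = candidate

# Sign-off uses the exact calculation to catch what the estimate missed.
print(f"placement = {placement:.2f}, exact timing = {exact_timing(placement):.4f}")
```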
Optimization functions rely on a cost function. “Deep learning, or any other machine learning techniques, are basically minimizing a given cost function,” says Arteris’ Siwinski. “That’s how the math behind it works. The cost function is constrained by how you defined what success is, and what the parameters look like.”
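In code terms, that is all an optimizer does: repeatedly nudge parameters downhill on whatever cost function was defined. A minimal gradient-descent sketch on an invented quadratic cost:

```python
# Minimal gradient descent on an invented quadratic cost function.
def cost(x):
    return (x - 3.0) ** 2 + 1.0          # minimum at x = 3

def grad(x):
    return 2.0 * (x - 3.0)               # analytic derivative of the cost

x, lr = 0.0, 0.1                          # starting point and learning rate
for step in range(100):
    x -= lr * grad(x)                     # move against the gradient

print(f"x = {x:.4f}, cost = {cost(x):.4f}")  # converges near x = 3
```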
The data problem
The number one problem is not computing power or the model. It’s the data. “If you only train your model on the current design, then AI’s knowledge is very limited,” says Yu. “Your success rate depends on how much data you have accumulated. If you have a series of designs, incremental or derivative designs, that will help. Maybe you are a design house, and you design for many customers. If you have trained a model for design A and you know design B will only have slight modifications, then you can re-use the model. Now your success rate would be much higher. Some verification engineers are more experienced and can transfer what they learned from previous projects to a new project. The same is true here. We need the data to train the right model.”
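A sketch of that kind of reuse, assuming a scikit-learn-style workflow (the arrays below are random placeholders, not real design data): train on the large history from design A, then fine-tune incrementally on the small sample from derivative design B rather than starting from scratch.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(0)

# Placeholder data: features and labels harvested from design A's runs.
X_a, y_a = rng.normal(size=(5000, 8)), rng.normal(size=5000)
# Derivative design B contributes far fewer samples.
X_b, y_b = rng.normal(size=(200, 8)), rng.normal(size=200)

model = SGDRegressor(max_iter=1000, tol=1e-3)
model.fit(X_a, y_a)                 # train on the large design-A history

# Fine-tune: continue from the design-A weights on design-B data
# instead of training a fresh model on the small data set.
for _ in range(10):
    model.partial_fit(X_b, y_b)
```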
With AI, an algorithm typically is trained using a broad set of data to create a model, which then can be highly optimized for performance and power on the inferencing side. “We’re taking completely trained models, some of which are proprietary to the customer, that are designed for the specific use cases they want,” said Paul Karazuba, vice president of marketing at Expedera. “So rather than just using a general-purpose device, the customers have specific things they want to do, and they’re looking to us to process them as optimally as it can be done. Our architecture was designed with the intention of being scalable, but also to be optimized.”
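One common flavor of that post-training optimization is quantization. The hedged sketch below uses PyTorch's dynamic quantization to store the linear-layer weights of a placeholder model as int8 for leaner inference; a real deployment to a custom NPU would go through the vendor's own toolchain instead.

```python
import torch
import torch.nn as nn

# Placeholder "trained" model standing in for a customer network.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

# Dynamic quantization: Linear weights stored as int8,
# activations quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
print(quantized(x).shape)   # inference runs on the optimized model
```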
The bigger challenge for EDA is how to get data that spans the entire industry in order to automate some of those steps. “The semiconductor design industry is going to be one of the industries that finds that the hardest,” says IC Manage’s Drako. “Designs are severely protected and coveted. Even the design rules at TSMC are closely guarded secrets, and they try to encrypt them. No one under the sun wants any of their design data to go anywhere outside of their company. We’re going to have a hard challenge as an industry. I do believe that eventually we will overcome it. We did it for synthesis, and for place-and-route. The tools have seen many designs because every time there is a bug, we give the data to the EDA company so they can fix the bug.”
Some are more positive, particularly when it comes to machine learning. “The semiconductor industry is very rich in data,” says Siwinski. “You have so many designs, so many instances of SoCs being designed every year and new generations, derivatives, similar things with different architectures, millions and millions of test vectors running through hundreds of corner cases to be checked. That means it’s a great place for machine learning, because machine learning is not really about the algorithms. That’s the easy part. It is about being able to frame the problem statement you’re trying to solve with the right data to support it. If you can frame that properly, you can absolutely be using machine learning.”
Machine learning often can make do with less data, but broader AI may depend on data sharing across companies, and that is a harder sell. “I cannot imagine that because then there would be no competitive advantage for anybody anymore,” says Arvind Narayanan, senior director for product line management at Synopsys. “Across all industries, there are sometimes a number of players that create a consortium to share technology. Will the whole industry come together to essentially combine all the information they have? I just don’t see it. I don’t see that coming because everybody is extremely protective of their IP, and I understand why they are.”
Data sharing makes everyone jumpy. “There is a lot of cooperation that goes on,” says Drako. “It’s not talked about a lot, because it makes all of the chip companies very nervous. In the case of training models, it gets harder because the vendor is asking for the data to be kept for a longer time period. There are going to be a lot of problems with it.”
There also are data validity and consistency issues to consider. “If you apply the same process used to create ChatGPT to the chip design world, I cannot just use any RTL that somebody has written before,” says Synopsys’ Andersen. “There needs to be a quality component to it. I need to know not only that this RTL is good, but also which RTL is good for what purpose. There might be different requirements in terms of QoR (quality of results) or functionality.”
Risk aversion
The semiconductor industry always has been risk-averse. “AI is going to make lots of mistakes because the training data, the model, and the solution are new and unproven,” says Drako. “However, AI will give the same answer, right or wrong, consistently. Humans don’t do that. Once I get my model and I prove that it’s accurate enough for my task, I can be assured that it’ll be accurate enough from then on. The problem with a human is that, if I train somebody and they’re accurate enough for the first month of the first year, I’m still going to get mistakes in the second and third and fourth year. Maybe talking about mistakes isn’t the right thing. Maybe consistency is the right way to think about it.”
One way to avoid the problem initially is to concentrate on optimization. “By design, an optimization system cannot do worse than your reference design,” says Andersen. “You may send it in the wrong direction with the wrong inputs, and then it searches the wrong space and will never find you a better result. All results that are worse will automatically be discarded.”
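That guarantee falls out of a simple best-so-far loop: every candidate is scored, and anything worse than the incumbent is thrown away, so the result can never be worse than the reference it started from. A minimal sketch with an invented cost metric:

```python
import random

random.seed(1)

def ppa_cost(x):                       # invented stand-in for a PPA metric
    return abs(x - 4.2)

reference = 10.0                       # the human reference design
best, best_cost = reference, ppa_cost(reference)

for _ in range(500):
    candidate = best + random.gauss(0, 1.0)   # explore around the incumbent
    c = ppa_cost(candidate)
    if c < best_cost:                  # keep only strict improvements;
        best, best_cost = candidate, c # worse results are discarded

# By construction, best_cost <= ppa_cost(reference).
print(best, best_cost)
```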
Mistakes do happen. “The results you get are related to how much effort you put into building it,” says Siwinski. “It’s very effort-intensive to do it properly. If you’re asking the wrong questions, or if you give the wrong data, then the results are not going to be very good. You need to understand it just to ask the right questions. How do you look at the data sets? How do you partition how you do it? It’s an art and a science to do this properly.”
You have to understand when errors cannot be tolerated. “There is a limit to how much we can rely on AI,” says Ansys’ Swinnen. “It does play a significant role in things like optimization, but also in things like thermal analysis. For example, when working with variable-size meshing, we need an algorithm that quickly determines where the likely hot spots are, and then we can build meshes that are much tighter where we know we need them and looser where not required. That allows us to speed up the whole process significantly by using AI to identify which areas need to be concentrated upon, but at the end, the calculations have to be exact.”
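A toy version of that idea appears below: a cheap predictor (a made-up stand-in for a trained model) flags likely hot spots on a coarse grid, only those cells get subdivided, and the exact, expensive solve then runs once on the final mesh.

```python
# Toy adaptive meshing: refine only where a fast predictor expects hot spots.
def predicted_heat(cx, cy):
    # Stand-in for a trained model: a bump near (0.7, 0.3).
    return 1.0 / (1.0 + 50 * ((cx - 0.7) ** 2 + (cy - 0.3) ** 2))

def refine(cell, depth=0, max_depth=3, threshold=0.15):
    x, y, size = cell
    cx, cy = x + size / 2, y + size / 2
    if depth < max_depth and predicted_heat(cx, cy) > threshold:
        half = size / 2                     # split the cell into four
        out = []
        for dx in (0, half):
            for dy in (0, half):
                out += refine((x + dx, y + dy, half), depth + 1)
        return out
    return [cell]                           # keep coarse where it looks cool

mesh = refine((0.0, 0.0, 1.0))
print(len(mesh), "cells; the exact thermal solve runs once on this mesh")
```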
Generative EDA
While the subject of AI is popular, it is generative AI that is the hot technology today. People are asking when AI will be able to generate Verilog or replace constrained random test pattern generation. “People are using AI to write software programs,” says Drako. “It is improving their productivity because it’s taking some of the drudgery out of it. I need to write a program to do X, and it’s not quite what I wanted, but if I change this, fix this, move this over here, then boom, it’s pretty good. So it increases productivity, and we’ll see it used very effectively in that manner in what I’ll call a design or creative industry.”
But the semiconductor industry is not driven by productivity in the same way the software industry is. “As exciting and as sexy, as eye-catching as some of this stuff is right now — and the hype over the last two months has been pretty high — the reality is that a lot of these models have a long way to go,” says Siwinski. “Can we get some image manipulation and creation? Absolutely. Music and other things related to language models, yeah, those are getting pretty sophisticated. Is that the same as being able to create advanced code that is going to be secure, that is going to be safe, and that is not going to have some of the same challenges as IP reuse? There are places where people can get libraries of things, which is great, but they’re not necessarily what I would deploy in high-performance programs, where you need to have high reliability, high security.”
Again, it comes back to training. “Where does that power come from?” asks Yu. “The power comes from a lot of data being fed into training. OpenAI didn’t disclose the amount of data used to train the most recent GPT-4, but I know that for GPT-3 they used several hundred billion tokens, and that means they have collected all the data from Wikipedia, from openly accessible web pages, and from many books and publications. That is where the intelligence comes from. It trained on GitHub, as well. So it has a lot of GitHub fed into the language model. When you look at EDA problems, do we have access to so much data to properly train a powerful model?”
Yu recently published a paper that provides the figures shown below. As a comparison, as of August 2022 the ImageNet training set contained 14,197,122 images in 21,841 categories.
Fig. 1: Data from “A Survey of Machine Learning Applications in Functional Verification” by Dan Yu, Harry Foster, and Tom Fitzpatrick of Siemens EDA. Source: DVCon 2023
There are some early attempts. “You can tell AI to create RTL, but is it going to be the most optimized RTL that will satisfy the PPA requirements?” asks Narayanan. “We are not there yet. It will spit out the logic function that you’re looking for, but the second step is how you optimize it. How do you take it to the next level? That’s a work in progress.”
As an industry, we do have some experience in this already. “The danger with things like language models is you may spend more time debugging poorly written RTL than you would have taken to write it,” notes Andersen.
This is similar to the early days of IP reuse where huge amounts of poor RTL flooded the market. “Even if AI gives us the RTL, we still have to do the quality check,” says Yu. “Perhaps that could also be automated in the future. We also have to integrate that design with other pieces and make sure the new design works as a whole. There are many steps until some model could produce a complete design.”
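A minimal sketch of such a quality gate, assuming Verilator is installed on the system: the generated RTL (a hard-coded stand-in here for model output) is written to a file and linted before anything downstream consumes it.

```python
import subprocess
import tempfile

# Stand-in for RTL that a generative model produced.
generated_rtl = """
module adder (input  [7:0] a, b,
              output [8:0] sum);
  assign sum = a + b;
endmodule
"""

with tempfile.NamedTemporaryFile("w", suffix=".v", delete=False) as f:
    f.write(generated_rtl)
    path = f.name

# Gate the output: lint before any human or tool integrates it.
result = subprocess.run(
    ["verilator", "--lint-only", "-Wall", path],
    capture_output=True, text=True
)
print("lint passed" if result.returncode == 0 else result.stderr)
```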
Conclusion
Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”
“AI may be wondrous, but it is not magic,” says Siwinski. “It’s just science and math. Machine learning is just another tool that is very data-dependent, and you have to ask the right questions. But it’s something that everybody should be embracing because it is going to be 100% pervasive.”
While EDA is adopting machine learning and other pieces of true AI, it is not ready to throw away many of the existing algorithms. “Machine learning is not a drop-in replacement for our existing algorithms or tools,” says Yu. “It is helping us to accelerate things that were not very efficient. It is helping to automate some processes where people were in the loop. Those are tasks where machine learning can help. Sometimes machine learning also can improve our previous primitive algorithms, making them more accurate.”
Generative EDA, meanwhile, may have to wait a little longer. “It’s unclear how this is going to play in our industry, which is very risk-averse,” says Drako. “AI will be used in design stuff where it is checked by humans and gives humans a template to start with, and then they can move forward more effectively, more quickly. Our industry wants surety. Eventually, we’ll get models that are trained well enough where we’ll get that surety.”