ChatGPT for Hardware Design
docs.flux.ai

PCB Designer on the team:
Holy cow this is awesome! The integration of project context with an LLM enabled more than the sum of its parts when it comes to information access.
When designing circuits for fast iterations, you're constantly managing risk and time expenditure; this leads to copying a lot of "flight heritage" circuits so to speak.
With the LLM, I'm able to get highly condensed starting points for better circuits when it comes to simplicity, understandability, or performance. As of right now, all I have to do is fact check the solution which is much easier than synthesizing one myself. I'm citing the example where I got it to design me a viable high gain analog frontend that meets my gain, bandwidth, noise, and biasing requirements.
I have no idea what you're talking about, but my experience 100% mirrors yours in most cases as a programmer.
GPT-4 produces a lot of my code now.
I'm leap-frogging my team a bit in productivity because they still don't like it, but it's so close to being undeniable.
I’ve been unable to replicate this. Could you please show me an example? I ask everyone who makes this claim and have yet to see a concrete example. I just can’t get it to do anything useful for me. I feel like I’m missing the boat!
It won't write your entire program, and you have to already know enough code to know when it gave you garbage, but... I find I can have it tackle small chunks and in some cases even glue them together in a usable way. It can often remind me of strategies I would not have thought to use, good or bad. It can also do some basic debugging, including seeing things my tired eyes often miss. That said, you kinda already need to be able to code or you won't know the wheat from the chaff. It feels a lot like managing a remote ESL person you will never meet.
I don't get this. I asked GPT-4 to help me build a machine learning program for historical stock prices. It spat back about 8 errors, functions that don't exist, nothing compiled, multiple logic flaws...
Which one of us is lying?...
Probably neither of you? I've had better luck coding with ChatGPT when I ask it very concise questions, mainly at the function or small class level, but still asking for enough functionality to feel satisfied. It helps to give it a "shell" of some code, or a starting point of code to build from. I also make sure to ask it not to include 3rd party libraries (I'd consider those under separate discussions with it). It's truly helpful. And that's saying nothing about copilot integration in the IDE which is sooo good.
None of you are. You are using the tool improperly. The tool cannot spit out entire applications yet, especially if all you told it was the final outcome. I see GPT-4 as a junior programmer in my team who is unusually productive. It cannot do the design for me, but executing it is a piece of cake for it.
> The tool cannot spit out entire applications yet,
Yesterday I had a ticket at my job to extend some functionality in a code base that was probably 200 files and 100+ lines of code in each file, and that's before any `import` references to other libraries.
How can you feed all of these tokens to GPT-4 in a cost effective way so that it knows about your application well enough to recommend/pull off code completion at a human-like level?
I'm working on exactly this. I think we can use embeddings and langchain agents to do this.
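For the curious, the embeddings half of that idea usually amounts to: chunk the codebase, embed each chunk, and pull only the most query-relevant chunks into the prompt. Here's a toy sketch of mine (the hash-based `embed()` is a stand-in for a real embedding API, and the file summaries are hypothetical):

```python
import math

def embed(text, dims=256):
    """Toy bag-of-words hash embedding. A real system would call an
    embedding API (e.g. OpenAI's) here instead of hashing tokens."""
    vec = [0.0] * dims
    for token in text.lower().split():
        vec[hash(token) % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def top_k_chunks(chunks, query, k=3):
    """Pack only the k most query-relevant chunks into the context
    window, instead of the whole 200-file codebase."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

# Hypothetical one-line summaries of source files:
files = [
    "load historical stock prices from csv",
    "train a regression model on price features",
    "render the dashboard charts",
]
best = top_k_chunks(files, "train a model on price features", k=1)
```

An agent layer (langchain or hand-rolled) then decides which retrieved chunks to show the model and when to fetch more.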
The largest models like GPT-4 have the interesting property of really, really finishing what you started. If you start with flaws of any kind, it will continue to produce them. The inverse is true as well.
This is an actual thing[1] and it’s something larger models are actually worse (better?) at. They score higher and higher on the loss function (did I predict correctly), but their utility (does it work) goes down.
Just thought it was noteworthy.
> If you start with flaws of any kind
I asked it to start and it provided me the flaws :D I can share the prompt if you'd like.
Please do, I’m curious.
Asking GPT-4 to make you an entire application as a complete amateur is absurd and shows you don't understand the context window.
Maybe he just writes really really really simple code all day and faked his way into a developer position? And then his team mates have to fix/redo anything he does?
I've seen it happen several times.
Wrong.
I’ve been interested in learning how to design PCBs, I wonder if this tech helps constrain the set of things one ought to focus on learning, with the assumption that AI can fill in other areas.
I love teaching PCB design! Like you, I'm also really curious about AI's contributions to learning. My opinion is that the automation of search and execution increases the impact one person can make, so more time can be spent on connecting the dots at a higher level. Whether or not education catches up in time is another story haha. It's pretty open ended right now, but it's undeniable that there's a huge educational gap there.
This is what's been holding me back from PCB design. Like, I know the math, but there are forces at play beyond just V1/R1=V2 kinda stuff. I'm really keen to see if an LLM can assist a hobbyist in making sound, quality designs that aren't going to break his/her wallet when the board comes in and none of it works.
About 12 years ago I designed an RF addon board for an RC car (to toggle lights). Even something as simple as switching with NPN transistors looks good on the bench software but when the chips came it didn't work at all. Did I mistakenly ask for PNP? Ugh! I used fritzing at first (still great software) and then upgraded to Eagle. I want to do something beyond Raspberry Pi/Arduino work but not quite ready to design my own radio/motherboard. I know electricity but I don't know why I would need a capacitor/resistor/transformer other than the very basics.
What resources would you recommend since you love to teach PCB design?
This XSeries on edX [0] looks promising for some of the prerequisite knowledge. Then for PCB design specifically, there are specialty courses on Udemy, for example, "KiCad Like a Pro" [1].
[0] https://www.edx.org/xseries/mitx-circuits-and-electronics
Have you tried KiCad? There are some interesting courses on Udemy about using it for PCB design.
Here is a nice demo video
How often is it able to get it right first try?
I'll interpret "right" as in giving correct information. I found that it depends on the type of question asked; if it's somewhat general information such as design concepts and design topologies, it impressively nails it 99% of the time for my limited testing period. The main area it gets things wrong relates to very specific information that has only one source: e.g. datasheet specs. I heard the team already has plans to mitigate hallucinations on this front.
Also, I find myself having a conversation as I would with a real person, so it's less of spitting out the right answer for any technical question I might have, but more of a design buddy to consult when you're blocked on your search through the massive electronics solution space. Of course, this is a very preliminary use case and I can't wait to see what other people use it for.
Engineer on the team here. Some of the interesting challenges of doing ChatGPT for circuit design were:
1) How to communicate the circuit connections to ChatGPT in the text prompt. We ended up just using the standard netlist format. Apparently it knows about netlists from its training data. It also knows about datasheets.
2) How to get the needed accuracy and precision for questions like "calculate the resistance needed between component A and B". For that, we adopted the popular langchain framework to integrate specific "tools" into the AI for specific recognizable problems.
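For anyone wondering what "communicating the circuit as a netlist" can look like in practice, here is a minimal sketch (my own function and formatting, not Flux's actual pipeline) that serializes a part list into a SPICE-style netlist string suitable for a prompt:

```python
def to_netlist(components):
    """Serialize a part list into a SPICE-style netlist string that can
    be pasted into an LLM prompt. Names and format are illustrative."""
    lines = ["* auto-generated netlist"]
    for ref, (kind, value, pins) in components.items():
        # One element line per component: reference, node list, value.
        lines.append(f"{ref} {' '.join(pins)} {value}  ; {kind}")
    lines.append(".END")
    return "\n".join(lines)

# A two-component RC low-pass as a tiny example design:
components = {
    "R1": ("resistor", "1k", ["IN", "OUT"]),
    "C1": ("capacitor", "560p", ["OUT", "GND"]),
}
netlist = to_netlist(components)
```

Because netlists of this shape appear all over the training data, the model can read the connectivity without any custom encoding.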
Out of curiosity, I noticed in the video that it recommended for a voltage regulator either an LM7805 or an LM1117 - either a super inefficient 5V regulator that has no business in new designs, or a 1.2V low-dropout 800mA regulator that is probably not appropriate to the other components it recommended (i.e. an LM7805 -> LM1117 would be 36% efficient at best).
It then also recommends a LiPo battery with a nominal 3.7V which obviously wouldn't be compatible with its LM7805 recommendation.
It also recommends an ESP8266 which is a 3.3V part so neither of the recommended regulators, nor the battery alone, would be appropriate.
Also, it recommends a 168MHz STM ARM micro - but it already has an ESP8266 with GPIO.
I'm not sure what it's suggesting I build but it doesn't really sound like a working environmental radiation logger ;)
How should I think about this?
In this case it seems to be giving examples of common ICs that are voltage regulators - these answers to high level questions are pretty vague by default.
We've had good results by asking follow up questions like "what power supply IC could I use to power a 3.3v microcontroller from a 3.7V nominal LiPo battery over the whole battery voltage range?".
I asked that question and I got: LM3940 and AMS1117-3.3 (LDO, high drop out but probably ok for low current - one isn't in stock but it doesn't have stock info yet), and TPS62203 and ADP2108 (bucks, both seem appropriate). These answers should get better as Copilot gets access to more and more tools.
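One quick way to sanity-check the LDO suggestions yourself: an LDO only regulates while V_in ≥ V_out + dropout, and the AMS1117's full-load dropout is typically around 1.1 V (verify against the actual datasheet). A minimal check:

```python
def ldo_headroom_ok(v_in_min, v_out, dropout):
    """An LDO only regulates while V_in >= V_out + dropout."""
    return v_in_min >= v_out + dropout

# LiPo discharge range: ~4.2 V fully charged down to roughly 3.0-3.2 V
# near empty. AMS1117 full-load dropout: typically ~1.1 V (see datasheet).
ok = ldo_headroom_ok(3.2, 3.3, dropout=1.1)
print(ok)  # False: the 3.3 V rail drops out well before the battery is empty
```

A buck converter like the TPS62203 sidesteps the headroom problem entirely, which is why the buck suggestions look more appropriate for the full battery range.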
btw I hope my comment didn't come across as too negative, like many folks I'm still trying to frame my own thinking around this kind of tooling. Thanks for the follow-up.
Not at all. I thought it was salient and very much what I wanted to know about. It seems there will be a heck of a lot of comments saying 'this is so awesome!' and this isn't a field where I'm able to look at the answer and immediately see the list of howling errors and hallucinations, so I'm very glad somebody can.
It's all very well expecting the hallucinations, but specificity beats Luddism every day. My gut reaction was 'ho boy, I'd better not trust this' but I didn't have the domain knowledge to understand why the GPT advice was dangerous. You did.
We are all good
I look forward to following your work. I don't know what the ideal integration of generative AI and hardware design is, but I do feel like there are opportunities. Excited to see what y'all cook up.
Thanks!
I believe we are extremely early…and so this is just a first baby step
Much more to come from us in the coming weeks
My god.. if you gave it real time stock updates to adjust BoM answers to availability that would be amazing!
Brace for impact!
We have a bunch of features that didn’t make the cut for today’s launch but should be available shortly
I find comments similar to yours about ChatGPT all over the internet. I finally took the time to ask it some questions over the last few days. All the answers it gave were mundane, the kind you'd expect from any click-bait web site. I Googled the same questions and found similar answers.
Some personal questions it refused to answer or give suggestions because "only you can decide" was the response.
I'm very suspicious of all this. I'm thinking the only real benefit--some may say--is the chat aspect. You can carry on a conversation rather than having to determine a proper Google search over and over again.
Which makes me think there is some hype involved.
> Which makes me think there is some hype involved.
There's _a lot_ of hype.
It's a good tool if you forget about any "intelligence" and just think of it as a tool. We don't expect search engines to be 'intelligent', we expect them to return results we are interested in, in an efficient manner.
In the case of this and similar tools, it is a text generator. It will generate _something_ based on your input. In many cases, it will make up stuff because it has to generate something. Note how it will not ask follow-up questions to 'understand' you (because it doesn't understand anything). You have to apply judgment and ask the follow up questions yourself.
One thing I found these tools to be useful for is mitigating the "blinking cursor on an empty document" paralysis. I just asked the "Pulumi AI" to generate some code that, while not really correct to my needs, is a good starting point for modification.
If you ask it to generate something for you, other than just a pre-made response that you could google, then it's more useful.
GPT 4 is a massive improvement in this regard.
Copilot specifically asked me to provide the capacitance value for the capacitor closest to the comment pin in response to my question about optimal resistance for a resistor. Once I provided the value (by setting it in the tool, not by typing it in the chat) and asked it to try again it thanked me for the new info and gave back a good resistance value and how it arrived at it.
You’re probably not using GPT-4, the internet is being flooded with takes about GPT-3.5’s quality like this simply because the author doesn’t know there is a meaningful difference with the new model.
GPT-4 still confidently makes up sources for wrong answers and throws subtle mistakes (the obvious mistakes aren't as big a nuisance) into output.
This isn't to say gpt-4 isn't cool or impressive or a development to watch and learn about and be excited about, but I frequently see criticism dismissed as "you must be using 3.5" while I find 4 still costs more time than it would have potentially saved.
Of course this is possible but usually criticizing a dismissal like this as being wrong comes after it has been proven wrong.
If GPT-X would just shut up if it doesn't know something it would already be 10x more useful than it is right now.
Isn't that the core issue? Its model doesn't really "know" what is real or made up.
When I go here: https://openai.com/product/gpt-4 it says "try on chatgpt plus" and sends me to the same page I log in with and that's where I've been testing it.
Be sure that you select GPT-4 from the drop down list of models. For each new chat, it reverts back to default GPT-3.5.
I don't see any dropdowns for such a thing.
I mean the more fundamental problem with the power supply is it recommending a "USB-to-Serial converter for powering the device." (See second example image.)
I'm overall bullish on AI systems but I'm not sure this space has enough detailed context to train on. Much of this knowledge is in-house/proprietary. We'll see how fast these are able to improve over time.
Why is the battery alone not appropriate for the ESP8266? It runs just fine off a lithium battery.
Does your module maybe have a built-in VRM? The datasheet said the operating voltage range was 2.5-3.6V. A LiPo battery has a nominal voltage of 3.7V and a fully-charged voltage closer to 4.2V. That's definitely out of spec without a VRM.
It doesn't, but it's very tolerant of 4.2. I think the most I've run it with was 5V.
I absolutely believe it works for you, but I wouldn't ship that to customers, and I wouldn't expect what is ostensibly an authoritative reference tool to recommend it at all.
I’m with you on not shipping out-of-spec designs. Maybe this is a good place for a reverse-polarity protection diode!
How do you deal with designs whose netlist or part list exceeds the context size limit?
It’s indeed a challenge
For many applications such as talking to pdfs you can use a vectors db like pinecone
But that doesn’t work well for schematics because for most of the interesting use cases you need access to the whole thing.
Here is what’s been working for us
1. Extreme compression and summarization of the data we dynamically put in the context
2. Leverage the extreme amount of world knowledge the cutting edge models have
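As an illustration of what compressing design data for the context can mean, here is a toy sketch of mine (not Flux's actual pipeline): collapse a flat part list into per-type counts so a large design's description fits a fixed budget:

```python
from collections import Counter

def compress_part_list(parts, budget_chars=200):
    """Collapse a flat part list into per-type counts so a large
    design's description fits a character budget. (Illustrative only;
    the real summarization is surely more involved.)"""
    counts = Counter(kind for kind, _ref in parts)
    summary = ", ".join(f"{n}x {kind}" for kind, n in counts.most_common())
    return summary[:budget_chars]

parts = [("resistor", "R1"), ("resistor", "R2"),
         ("capacitor", "C1"), ("regulator", "U1")]
print(compress_part_list(parts))  # 2x resistor, 1x capacitor, 1x regulator
```

The model's world knowledge then fills in what a "regulator" or "capacitor" implies, which is what makes the aggressive compression viable.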
Do you do any fine tuning of the model for circuit analysis and other hardware design literature?
I'm curious if you fine-tuned a GPT model with specific hardware-related examples. Based on the OpenAI docs it seems like a pain to get enough good prompt:response pairs for fine-tuning (A few hundred minimum).
Senior EE here! I'm not at all impressed with this page because you don't show the tool doing anything interesting. All your examples are undergrad-course or lower level of difficulty. I feel like reading several of the tool's responses actually wasted my time because it took longer for me to read the text than to come up with a better answer. (Though admittedly I've been at this a while. And the check-if-I-screwed-up-the-chip-select one was pretty neat, I did like that!)
Can you demo something with a little more bite to it? Seeing that you can help me save time on actually-meaty pieces of design work would make a lot of difference for me. Maybe something like "design a correctly biased class-AB output stage, with rail-to-rail swing on +/-5V rails, intended for use boosting op-amp output current"? That is an interesting one because (1) it's a small but real, useful, meaty problem with (2) a lot of examples out there in the world to copy from with (3) a nontrivial added constraint in rail-to-rail out but (4) even solutions that aren't all the way there are useful.
Not a senior EE, but equally unimpressed.
The thing about LLM is that it answers based on data it has seen before. So if enough posts on stackoverflow in 2021 claimed mps340 is the best CPU for beginners, that's what the AI will recommend.
There are 3 problems with this: it can and will be gamed, it is usually more conservative and not on the bleeding edge, and finally you can't ask questions you didn't know you needed to ask, so future products built by some random dude using ChatGPT will have reliability and safety issues.
> The thing about LLM is that it answers based on data it has seen before.
Modern LLMs are able to perform web searches to make decisions on contemporary data. Once they have proper API support your concerns should be resolved, hopefully in a few weeks.
> reliablity and safety issues.
The solution to this is fine tuning / RHLF. OpenAI have done a pretty extensive job at getting political safety for ChatGPT with RHLF. It seems reasonable that RHLF could achieve a similar result in the hardware domain.
> you can't ask questions you didn't know you needed ....
Solvable by prompt engineering. You can wrap user input in a prompt. As a toy example: "Here is user input $userInput if you have safety concerns about their project please respond with questions you think the user forgot to ask". Might also be possible to tweak with fine tuning/RHLF.
"RHLF" is Reinforcement Learning from Human Feedback? (Strange acronym.)
I don't see how that helps a tool become useful to a very skilled person, if the "human" side of things is polluted by 95+% of users having very low skill. It's great that you can train LLMs on the world's best reference material! But I don't see how you can get the world's best updates into that training set without hiring the world's best experts. So the tool will have very little value for anyone above a certain skill ceiling. Search has already fallen victim to this effect (I'm tired of results pages full of beginner material when I have a deeper question!) and I cannot see this being better for augmenting the training sets of LLMs.
It should be RLHF, my bad with the spelling.
> So the tool will have very little value for anyone above a certain skill ceiling.
LLMs aren't great for doing tasks you don't know how to do, because you'll eventually have to debug the output. However they excel at performing time consuming tasks that you could do if you really wanted.
That's why I think they'll actually be more useful for experts.
> I'm not at all impressed with this page because you don't show the tool doing anything interesting.
a) has there been anything like this before?
b) give it a year and see what it does
I've been playing with this for a while. In my experience, current LLMs work really well for looking up general electronics knowledge and reinterpreting it in the context of your problem. "What are the reasons my resistor-based voltage divider isn't working", that kind of stuff. Not useful for seasoned EEs, great for hobbyists.
But the moment you're asking LLMs to reason about the specs or the applications of specific chips, they will give made-up answers around 80% of the time. This is probably a matter of the data being a bit too sparse. Pick a chip and ask about its supply voltage range, and it will probably get it wrong, even for the most popular stuff.
> the moment you're asking LLMs to reason about the specs or the applications of specific chips, they will give made-up answers around 80% of the time. This is probably a matter of the data being a bit too sparse. Pick a chip and ask about its supply voltage range, and it will probably get it wrong
But isn't your supply voltage example exactly the sort of stuff they should get right? It's just regurgitating data sheets.
I kind of expect a tool like this to be able to operate in one of two, theoretically distinct (but maybe not practically distinct) modes. One is basically "blueprints", very much in line with the example I suggested earlier: draw me up one of these that fits in here. I wouldn't expect creativity, just more or less rote execution of a common plan. The other mode let's call "tuning": continuing with the class-AB stage example, this would be things like setting the biasing resistors correctly. That's a tedious task, possibly requiring simulation to do well, possibly just copyable, but with a large margin for usable results. I may not care exactly what I get as long as it's somewhat workable, which would be a good place for machine help. Or I might just want a decent starting point for my own simulation. I think "AI" techniques could handle either of those general modes and produce useful (if imperfect) results that save overall time.
What I don't expect AI to do is design architectures for me, or pick approaches. A tremendous amount of my value-add as an engineer is just me saying "no, don't choose approach A, I know it looks nearly equivalent to B on paper, but B is going to work out better". AI is not there now and I don't see LLM-style AI getting there any time soon, just by its nature. (At least not when it's a genuine judgement question and not an education/"A is never right" situation.)
What I don't think AI can do but I really, really wish it could is help me pick parts. All it needs to do for this is read and halfassedly-understand a bunch of datasheets, more datasheets than I can read myself. I think LLMs can do that! Though they might need to also be able to read graphs or simple diagrams ("Find me an EEPROM in the 208-mil SOIC package" is a great prompt and seems thoroughly actionable! But sometimes that dimension is only provided as part of the package drawing, which might be an issue.)
Recently I needed a PMOS FET with particular specs and was getting frustrated not finding quite what I wanted. So I asked ChatGPT. It was great at making me laugh and giving me a little break from datasheets! It did not actually help find a part. (It kept suggesting NMOS parts when I needed and asked for PMOS.)
So, kinda the same level for software development. But people keep repeating that I will be unemployed within 1 year.
Founder here:
Incredibly stoked to announce this today!
This is a dream a long time in the making. When we first thought of rebuilding the hardware design tooling stack from scratch, AI driven design seemed like the dream at the end of the tunnel...only 3+ years later we are here
Given the open ended nature of AI Expert agents we are launching a community preview. We are committed to keep building http://flux.ai in the open and are excited to discover this new realm with all of you.
Hit me with questions
If this works well, are you interested in training a model specifically around the domain you work in so you could get deeper integration with your product?
I have a personal project similar in concept to flux, but rather than working at the circuit level, it's at the device, firmware, and infrastructure level. I've been trying to use GPT to intuit how to network devices using different protocols, such that it might be easier for lab techs (for example) to discover how to manage and maintain their experiment tooling and infrastructure. It's really hard, though. The outputs and level of integration are mind-bendingly good, but... It's not perfect, which makes it too clumsy or even dangerous for what I want to build.
I think I envision something like GPT transforming human language into prompts for a narrower model with more predictable outputs, which is far more aware and capable with the application's logic and data. The user experience could then give you the conversational ease, yet far better application integration as well.
Anyway, just curious what you're thinking and very excited about projects like this. This is really cool.
Yea def looking into this
We already use a multi-model approach and a mix of non-deterministic and deterministic tooling in the pipeline to optimize for a given task
Lots more work to be done here
The flux docs indicate that you're using a custom-trained LLM and in another comment it was mentioned you're using LangChain (to integrate non-deterministic tooling perhaps). Just curious, but are you using some combination of your own model, GPT3/4, and calculators / datasheet readers / etc?
The answer is: all of it!
Not everything is in prod yet but we def combine all the things you can imagine to pull this off
Have you reevaluated the decision not to implement much by way of basic interface features like symbol and footprint editors for such a long time? It's a real barrier to even playing around with the software. I want to see you succeed, and my experience in the EDA industry and as an electrical engineer is making me wonder about that approach.
It makes sense if you're just shopping it around to the EDA incumbents since they'd have you redesign those parts anyway, but the fact that you currently offer a paid subscription to users makes me think that either you've underestimated or I've overestimated the importance of having those features by now. I guess that could also make sense if you want a more manageable user base for now.
Not sure I follow…maybe there is a misunderstanding here
We do have symbol and footprint creation and editing capabilities…in fact the vast majority of parts on the platform have been created by users.
Now what we don’t do is offer a modal editor like legacy tools do…instead it all happens in the same mode, which simplifies a lot of the UX and also provides powerful new abilities such as using a schematic to create simulation models.
Now that said, I think we can do a lot more here to further improve the experience, and we def will. But you are right in that it hasn’t been much of an issue, because you have access to a vast preexisting library on the platform
The last time I tried to evaluate it was a long time ago and it seemed to be all svg uploads. I had to re-read the docs more closely after reading your comment. It says you need to enter svg only for non-rectangular symbols, but it's not clear if you can create or modify a pad shape other than circular or rectangular from within the tool. Obviously good progress, and more than I understood when I wrote my previous comment. That's minimum viable for many to start using it, so my concerns are addressed.
The single mode thing is cool.
Thanks for giving us another shot
The circular and rectangular options have a bunch of transformations available…you can turn circles into oblong shapes simply by providing asymmetric size values, and you can set a corner radius for rectangular pads and even for your layout.
And as you probably saw you can load symbols as svg and you can also load custom pad, layout and silk shapes either as svg or dxf
Less known, but it also works for shapes you are reading off datasheets: you can just type an svg path into the shape field of each object type
Our philosophy is to make the most common things really simple and straightforward, and to make the edge cases possible. But yeah, someday we’ll also ship online shape editing features
In that case, the docs don't really do justice to it. They still read as if many features don't exist yet.
GPT is really bad at math. Doesn't that mean it's also bad at doing the math for designing circuits?
Maybe not. It understands data sheets, and seems to be able to connect the dots based on probabilities around connecting components together. If you don't ask it to do large-scale design tasks, it's okay at narrowly scoped tasks like this.
If it's wrong, you can determine that during simulation. You could also assert that in the background, I suppose — prune the bunk suggestions by the language model before presenting them.
In my own experiments I find GPT can succeed in complex tasks if you prompt it multiple times at different steps of the task, and have some level of recursive validation and re-prompting to keep it in line. In a case like this I have a feeling it would do a great job, and would be especially useful for the discovery/exploration phase of a new project.
Here's another interesting idea: the marketing blurb mentions reducing costly errors—but with the amount ChatGPT hallucinates, how does this work? What steps have been taken to mitigate hallucination?
LLMs do indeed hallucinate.
What’s been working well for us is to give it access to factual data sources such as calculators, component data sheets, etc.
It does really well from our tests in the past few weeks, so well that we decided to invite everyone to explore the boundaries with a community preview
Just give it a shot and report back
Can you elaborate on "really well"? Because a few weeks of testing and "really well" versus the claim of reducing costly mistakes on the marketing blurb whilst simultaneously acknowledging hallucination doesn't inspire confidence.
I'd be interested to know a lot more about testing methodology too. A hallucination could be a much more expensive problem than someone simply making a basic mistake that someone might otherwise catch if they were doing things the usual way.
Not trying to be obstinate, genuinely interested, but sceptical as my experience of ChatGPT has been very negative due to hallucinations, to the point that I find that the marketing materials sound far too good to be true versus the comment here, which seems much more grounded.
To give you an example of the typical response quality we encountered:
@copilot can you design a low pass filter for me with 3db point of 300khz?
copilot: Sure, I can help you design a low-pass filter with a 3dB point of 300 kHz. A simple passive RC (resistor-capacitor) low-pass filter can be used for this purpose. To calculate the values of the resistor (R) and capacitor (C), you can use the following formula:
f_c = 1 / (2π · R · C)
Where f_c is the 3dB cutoff frequency (300 kHz), R is the resistor value, and C is the capacitor value. Let's choose a standard resistor value, for example, 1 kΩ. Then, you can calculate the capacitor value as follows:
C = 1 / (2π · R · f_c) = 1 / (2π · 1000 · 300000) ≈ 530.5 pF
Since 530.5 pF is not a standard capacitor value, you can use the closest standard value, which is 560 pF. With these values, the actual 3dB cutoff frequency will be approximately:
f_c = 1 / (2π · 1000 · 560e-12) ≈ 283.5 kHz
The low-pass filter can be assembled by connecting the 1 kΩ resistor in series with the input signal, followed by the 560 pF capacitor connected to the ground. The filtered output can be taken across the capacitor. This design will give you a first-order RC low-pass filter with an actual 3dB point of approximately 283.5 kHz.
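For what it's worth, arithmetic like this is cheap to verify independently; a few lines of code (function names are mine) reproduce the calculation:

```python
import math

def rc_cutoff(r_ohms, c_farads):
    """First-order RC low-pass -3 dB frequency: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def c_for_cutoff(r_ohms, f_c_hz):
    """Capacitance needed to hit a target cutoff with a given resistor."""
    return 1.0 / (2.0 * math.pi * r_ohms * f_c_hz)

c_target = c_for_cutoff(1000, 300e3)  # capacitance for 300 kHz with R = 1 kOhm
f_actual = rc_cutoff(1000, 560e-12)   # actual cutoff with the standard 560 pF
```

This confirms the ≈530.5 pF figure, and the standard 560 pF value puts the real cutoff at roughly 284 kHz, close to the quoted number.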
And how about the negative responses involving hallucination? Ideally giving an example of the worst hallucination encountered thus far and least worst thus far.
As a potential customer, I'm more interested in knowing where the potential known failure points or points of friction are than knowing what's working well.
Yes, when it works, it's certainly compelling in its responses—but as it never knows when it is wrong, it is equally compelling at a cursory glance when it hallucinates.
For most folks interested in this, their Dunning-Kruger complex will prevent them from picking up on when it's wrong because it's so convincing and sure of itself.
AI is useful for making suggestions, but you shouldn't blindly trust it like that no matter how advanced it gets. It helps you work faster, not beyond your abilities.
That's the thing: I'm not actually sure it does help someone work faster if everything it suggests needs to be fact-checked, while it always presents itself as confident and never knows when it is wrong.
A lot of people liken AI to having a junior role working with you, but most junior positions have been to university and have a pretty solid understanding of a field in the first place. I wouldn't have a junior role hallucinating things that don't exist on a regular basis.
The speed of generating an output is only as meaningful as how consistently correct and appropriate that output is in terms of its value. I remain unconvinced.
If you don't do your job because you blindly accepted advice from a confident AI, a confident senior engineer, or a confident application note, then you might not have that job for much longer.
I don't know if it will make you personally work faster or not, that's your business to sort out.
Please def report back if you catch it hallucinating
This is exactly why we are doing the community preview…to make it your favorite design tool.
Have you tried recently? I am getting surprisingly good results lately with GPT-4 on basic physics problems with no prompt engineering (just stating the problem). It seems to recognize when it needs to "think step by step" and just do it.
I hope it would not lol
For starters: From our testing it’s really extremely good at providing the right formulas to solve the problem at hand.
But you are right, these models are inherently non-deterministic and can therefore struggle with things like math. The solution is to give them access to tools such as a calculator, which works really well.
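The calculator-tool pattern mentioned above can be sketched roughly like this: the model emits a plain arithmetic expression, and the host evaluates it deterministically instead of trusting the LLM to get the digits right. This is a minimal illustration only, not Flux's actual implementation; the expression format and the safe-eval approach are assumptions.

```python
import ast
import operator

# Operators we allow the "calculator tool" to evaluate.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    """Safely evaluate a plain arithmetic expression (no names, no calls)."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported syntax")
    return ev(ast.parse(expr, mode="eval"))

# e.g. the 560 pF cutoff calculation from earlier in the thread:
print(calc("1 / (2 * 3.141592653589793 * 1000 * 560e-12)"))  # ≈ 284205 Hz
```

The point is that the model only has to pick the right formula; the arithmetic itself never goes through the sampler.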
when I'm building circuits, usually I know what I want to design on a high level, but once it comes time to turn the schematic into a BOM, I get bogged down in minute differences between similar components
e.g. most recently, I needed a boost converter supplying at least 300 mA sustained at 3.3 V from the lowest input voltage possible (alkaline-battery-powered gadget). After an hour or so of searching I had to choose between the TLV61220 and the TPS61021A. Maybe there is a better third chip? At that point I ran out of time to research and just figured the TPS61021A would do.
Digikey/Mouser/etc have parametric filtering (when it works), but optimization among three variables is not the strong suit there, especially when you have, say, an 8mm square to fill and cost is a factor (and availability/flexibility between footprints)
I would love to use this as a fair recommendation engine (that isn't sponsored by TI, skewing all results towards their products) where I can give it my circuit's minimum/recommended/maximum parameters and have it suggest applicable parts, or even take my finished designs and suggest cheaper alternatives for parts that match the design's footprints but have lower capabilities, if that's all my circuit needed in the first place.
(just threw my query into chatgpt and it suggested https://www.ti.com/product/TPS61221) not bad.
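A sketch of what such a recommendation engine might do under the hood: filter candidates on hard requirements, then rank the survivors on a weighted score. The part names, parameter values, and weights below are illustrative placeholders, not datasheet figures; substitute real values before trusting any ranking.

```python
from dataclasses import dataclass

@dataclass
class Part:
    name: str
    vin_min_v: float   # lowest startup input voltage
    iout_ma: float     # sustained output current at 3.3 V
    area_mm2: float    # solution footprint area
    cost_usd: float

def feasible(p: Part, iout_req_ma: float = 300, area_max_mm2: float = 64) -> bool:
    # Hard requirements: enough current, fits the 8 mm square.
    return p.iout_ma >= iout_req_ma and p.area_mm2 <= area_max_mm2

def score(p: Part) -> float:
    # Lower is better; weights are arbitrary placeholders.
    return 10 * p.vin_min_v + p.cost_usd

candidates = [
    Part("PartA", vin_min_v=0.7, iout_ma=350, area_mm2=9.0,  cost_usd=0.60),
    Part("PartB", vin_min_v=0.9, iout_ma=600, area_mm2=12.0, cost_usd=0.45),
    Part("PartC", vin_min_v=0.8, iout_ma=250, area_mm2=6.0,  cost_usd=0.30),  # fails Iout
]

ranked = sorted((p for p in candidates if feasible(p)), key=score)
print([p.name for p in ranked])  # ['PartA', 'PartB']
```

The hard part, of course, is getting accurate parametric data for every candidate; the ranking itself is trivial once you have it.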
I would love to hear/read about your journey. Especially what your thoughts were at the start. There wasn't much like ChatGPT when you started 3 years ago, wasn't that fog scary af? How many times did you pivot/change your AI models?
We knew that we had to build a foundation first to have any shot at getting AI right.
I worked at Meta for years and knew first hand that ML/AI can only ever get good with loooots of training data.
So we needed a way to produce lots of high quality training data first and a way for users to be able to reason about results and edit them.
There is a graveyard of failed AI hardware design projects, and one thing they all have in common is that they are black boxes…it's very obviously the wrong approach. But every week a new one pops up…I don't even understand why anyone funds this stuff anymore, frankly…it can't work.
So we embarked on building our own eCAD editor first…that was gonna be step number one. It was also the hardest and most expensive way to go about it…but if you have big dreams then you just have to go and do big things. I resent these get-rich-quick schemes personally anyway…I love doing hard things!
LLMs just starting to get good around the time we had the editor ready for public launch (after years of beta) was a happy coincidence that makes us look like geniuses now. But I'll take it lol.
We had been actively playing on the side with ideas for first AI features for the past 6 months as we geared up for the launch of the core editor, but the critical insight to use our existing projects' comments surface was a last-minute brain fart that came to us about 6 weeks ago.
Things went pretty quick from there and we are moving extremely fast now that we have the right foundation.
Exciting times!
> rebuilding the hardware design tooling stack from scratch
This kind of talk is tacky and repels technical users.
Opened site. Chat popped up in the corner and started making sound effects. Closed site.
I guess it doesn't matter if a few people are annoyed, if the end result of these things is a new increase in conversions. But I wish "don't annoy people" was actually in the product requirements.
> Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
Just flag the comment
Most "annoyance blocker" filterlists get these. In this case it's intercom.io which I block with Adguard's Annoyances lists: https://adguard.com/kb/general/ad-filtering/adguard-filters/
I can't imagine using the internet without uBlock Origin, SponsorBlock, and a whole host (no pun intended) of filter lists.
Ah the trusted HN top comment that’s never about the product but a complaint about a minor marketing annoyance on the website.
Still wondering about your friends' MegaCryptoPolis investments
What are you talking about?
this obviously https://news.ycombinator.com/item?id=26899961
damn man, that was 2 years ago. I'm not sure what happened to the buyers but looking at MCP's twitter, they're still shipping product and seem to have a player base.
you don't personally know those people anymore?
I do, but I assume they’ve moved onto other investments. 2 years is a long time to keep holding edge case investments
uBlock Origin on Firefox hid that annoyance for me.
I recently used ChatGPT to find a part. Maybe this was an anomalous result, but it was wrong. Don't know if this will be an issue with your project.
Me: Is there a digital to analog converter with an 8V analog range and serial input?
CGPT: Yes, there are several digital-to-analog converters (DACs) available with an 8V analog range and serial input. One example is the AD5541A DAC from Analog Devices. It has a 16-bit resolution and can accept serial input in the form of SPI or QSPI signals. The output voltage range is programmable and can be set to 0 to 8V using an external reference voltage.
Another example is the MAX5389 DAC from Maxim Integrated. It has an 8-bit resolution and can accept serial input in the form of I2C signals. The output voltage range is also programmable and can be set to 0 to 8V using an external reference voltage.
These are just a couple of examples, but there are many other DACs available from different manufacturers that meet your specifications.
Me: Those parts are limited to 6v max
CGPT: I apologize for the mistake in my previous response. You are correct that the AD5541A and MAX5389 DACs have a maximum output voltage range of 5V, and they cannot meet your requirement of an 8V analog range.
However, there are other DACs that can meet your specifications. For example, the DAC5578 from Texas Instruments is a 16-bit DAC that can operate from a single 5V supply and has a programmable output range up to 10V. It also supports a serial interface with SPI or MICROWIRE protocols.
Another option is the MCP49xx series from Microchip, which includes several DACs with output voltage ranges up to 10V and a serial interface with SPI or QSPI protocols.
I hope this information is helpful, and please let me know if you have any further questions.
I've had this experience as well. I find the trick is to provide as much information as possible and keep the expected solution's scope to a minimum. That Flux is including context about the project a user's working on will probably help more and more as the project is built out, which is cool. The second part, though, keeping query scopes narrow, is up to the user I guess.
It would be interesting if circuit simulation could be used to vet out suggestions, but you'd need a vast database of components and their pinouts, GPT or something else would need to know how to arrange the circuit in order to simulate them properly, and so on. That's the only way I can think to validate outputs at the moment, though.
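For simple analog blocks you don't even need a full SPICE run; a closed-form check can already catch gross errors in a suggestion. A sketch for vetting a first-order RC low-pass recommendation against a spec (the 10% tolerance is an arbitrary assumption, and this obviously doesn't generalize to arbitrary topologies):

```python
import math

def rc_lowpass_gain_db(f_hz: float, r_ohms: float, c_farads: float) -> float:
    # |H(jw)| for a first-order RC low-pass, in dB
    w = 2 * math.pi * f_hz
    mag = 1.0 / math.sqrt(1.0 + (w * r_ohms * c_farads) ** 2)
    return 20 * math.log10(mag)

def vet_lowpass(r: float, c: float, target_fc_hz: float, tol_pct: float = 10) -> bool:
    # Accept the suggested R/C pair if its -3 dB point lands within
    # tol_pct of the requested cutoff.
    actual_fc = 1.0 / (2 * math.pi * r * c)
    # Sanity check: response really is ~-3 dB at the computed cutoff.
    assert abs(rc_lowpass_gain_db(actual_fc, r, c) + 3.01) < 0.05
    return abs(actual_fc - target_fc_hz) / target_fc_hz * 100 <= tol_pct

# The 1 kΩ / 560 pF suggestion from earlier, against a 300 kHz spec:
print(vet_lowpass(1000, 560e-12, 300e3))  # True
```

Scaling this idea to real schematics would need exactly the component database and netlist-to-simulator mapping described above.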
Interesting. I tried the same question (each a new conversation, GPT4) and 10V and above it catches, but 8V seems to be in its blind spot. Maybe it hasn't learned anywhere to take absolute maximum ratings seriously?
Update: after asking it to explain absolute maximum ratings and check that for this part (which it knew correctly) and pointing out its error, it finally suggests a reasonable part, the AD5758.
It's clearly missing experience with magic smoke.
Can report similar experiences.
Just signed up for a free account, will give it a try, but expecting hallucination to be a major issue.
Same here, when asking for an N-channel complement for a P-channel power FET. It would get some of the parameters right, such as dissipation and package type, but would get others wrong, like insisting on 800V Vds parts when the original target was a 50-volt part.
Promising, but not quite ready for prime time yet.
ChatGPT answers entirely from memory; it's not suitable for hard data. A Bing Chat type of architecture is more suited, though even that still couldn't find a part matching those criteria.
Similar experience here. The only thing I have found it to be somewhat okay at is figuring out the "jellybean" version of a component I am looking for; it is sometimes kinda difficult to figure that out through the usual parametric search. Though, even then, it has recommended components that are very much not what I asked for, stuff as simple as recommending an NPN instead of a PNP.
This report is unhelpful without the GPT version you are using.
ChatGPT Mar23
That's the current date for both 3.5 and 4; which are you using?
3.5, I believe. But it's not like they make it easy to know which one you're using.
I don't mean to sound like a real downer: but I don't see this tool "getting good" for the simple reason of: what are you training it with?
I thought ML needs huge datasets? This is great for GitHub's Copilot as there is plenty of high quality, production, open source code they've used without authorisation from the creators (and possibly proprietary code).
But this dataset just doesn't exist for hardware development on the internet. Almost everything on the net is hobbyist stuff. Great for hand building 10 on a desk but absolute rubbish for a (even horribly) manufacturable and COMPLIANT product.
Further: an anecdotal, but to me very telling, one-liner from the video: "the component is 'not for use in new designs' but we can ignore that because there's 260,000 of them". I have personally sniped larger quantities than that from Digikey. "Not for new designs" means you don't use it in a new design. PERIOD. And ironically, for something like a Murata cap, they've probably literally changed one letter in the PN because they've made a tiny process or recipe change, and it's more a "move everyone to the new iteration" process.
I do sincerely wish them success, and maybe they can carve out a small bit of the hobbyist market, but that's probably it.
That's generally true for ChatGPT generating code as well. Nevertheless, I've been able to ask it to solve some semi-complicated coding problems, along with unit tests. The code it generated was structurally better than what a lot of junior to mid-level engineers produce.
It also made some similar mistakes to what you describe, with silly things like the wrong parameter names. It still saved a lot of typing compared to doing it all manually. It didn't take long to fix it either, it just required fairly decent software engineering skills.
How is this relevant to circuit design? Well, in the same way it could potentially be an effort amplifier for people who already thoroughly know what they are doing.
AI is on the verge of becoming a really powerful lever.
I think you should give it a shot and see for yourself
This is a first step today; it's neither done nor perfect…but it kept blowing our minds and those of our most senior testers, so we decided not to withhold this incredible technology and instead invite open experimentation and learning.
The timing could not be more perfect. Yesterday I realized that I need to create my very first PCB to repair and smartify a device that I broke. Today I thought I should use ChatGPT to better understand how things work. And now this.
There are a lot of space constraints so first thing is figuring out if it is even possible to build as small as needed. Usually I would probably be better off starting to build big first and then scale down. But I feel confident that I can already get quite far with the amazing resources out there.
I spent hours watching videos about soldering, PCB design, some of the components I think I would need (very flat flexible cables, small ESP32 SMD, no space for USB) and then how to start actually doing the work. My last exposure to electronics was in grade 8 and that was very heavy on cables.
This 3h video here was very helpful to understand the process. It's an overview of building your own ESP32 dev board with EasyEDA from start to finish: https://youtu.be/S_p0YV-JlfU
I like these top-down big picture kind of things that go over everything in just enough detail to know what to search for. Does something like this but with flux.ai exist for complete beginners? The tutorials seem more focused on users switching over from other PCB design tools.
Every step of the way I am just amazed how today I can get physical things custom made and shipped within days.
And then there's fun videos like this here: Small scale fully automated PCB production in what is basically a garage -> https://www.youtube.com/watch?v=ZH6ljHUx_IM&t=800s
That last video... drool.
As someone who is not well versed in the current LLM paradigm, how are the LLMs for technical domains trained? I can imagine how a model based off interent text would make for a _general_ conversational partner, but I'm unsure how a model would produce accurate information from such a niche prompt.
Also curious: has anyone created an LLM for circuit schematics trained on HDL code?
Very cool application. It's nice to be reminded that people are actually trying to make useful products in a world where it seems everyone is trying to sell you a solution with no problem.
For starters you can feed it every book on electronics and every datasheet out there
That plus broad world knowledge and reasoning abilities gets you very far
And then you dynamically provide proper context and prompting, as well as deterministic tooling such as a calculator, and voila…you've got your new favorite virtual teammate :)
Why on earth would you call this "copilot" when GitHub has already long established that as a name in this space? :I
I believe aviators have been using it longer.
Indeed, but under trademark law, the context-of-use matters, and is what allows generic terms from one domain – like 'copilot', 'apple', 'slack', 'y combinator' – to become defensible trademarks in another commercial-offerings domain.
From this landing page's headline & 1st paragraph, I was still thinking they were talking about the broader-market, better-known Github Copilot – also a for-pay LLM-driven coding assistant! – in some way. In the 2nd paragraph, I saw they were just reusing the name of someone else's LLM coding assistant for something different that they independently trained.
Note also the current submission headline here is "ChatGPT for Hardware Design", not the page's "AI For Hardware Design with Copilot".
Did the submitter not use the usually-requested original page title, because they knew `Copilot` would be misunderstood? (They created another misunderstanding via their change: ChatGPT is only an analogy for what Flux.ai's product is doing, rather than a contributing part of the system.)
If they aren't using ChatGPT in Flux.ai, that's textbook trademark misuse!
Check out my new product -- It's Windows for Raspberry Pi! (Actually a Ubuntu port)
Their page doesn't mention 'ChatGPT' - that analogy was added by the submitter. But, Flux.ai has named their LLM-based coding assistant 'Copilot'.
Github can't monopolize the word "copilot". It's a basic english word to describe someone that is assisting you to steer a vessel/team/project. It's been around for hundreds of years in naval industry, and in aviation since the dawn of flight.
Oh, they can if they get a trademark. It's a navigational term, but not a computing term. Inventing a new meaning for a word is standard use of trademark.
Was wondering the same thing - just seems a bit daft and is going to really confuse people.
Yeah, this seems like a lawsuit waiting to happen. Rename it to 'Solder' or something before that happens.
GitHub doesn’t own the copilot trademark. I would actually like to have these assistants called “copilots” instead of “coding LLM”. Since these are going to be a part of our lives let’s use a useful and memorable name.
Simply via commercial use, Github could choose to assert a common-law trademark on 'Copilot' within their field-of-use. But also, they've registered "Github Copilot" at the USPTO:
Presumedly they would likewise register "Flux Copilot"?
Do you think I could offer a custom-built Windows PC under the registered trademark "Gordon's Macintosh"?
Except that "copilot" doesn't appear to be registered by itself in this field, only in combinations. But we'll see.
This has absolutely been a long-awaited application of GPT.
What is the process of adding components to the list? Additionally, could rules be user-defined (e.g., to minimize EMI/EMC)?
Yeah you can create your own components or work with the over 250k we have in the public library.
Flux is designed much like GitHub in this regard…you can fork other people's projects and whatnot.
You can already ask it today to perform EMI/EMC checks for you…and we have started thinking about using it to write static rules for you among other things
What I want most is simulation models of commercial components, and a good simulator...
Oh and something that can read datasheets for me, and extract important parameters (or convert them to spice models).
A tool that can hunt for cheap components would also be nice.
Apart from that, I don't think I want any fuzziness in my workflow.
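Datasheet-parameter extraction could start as simply as pattern-matching over extracted text, though real datasheets are PDF tables and far messier than this. A toy sketch on a made-up snippet (not from any real datasheet; the field name and format are assumptions):

```python
import re

# Made-up, datasheet-style text; real datasheets would first need
# PDF-to-text extraction and table reconstruction.
SNIPPET = """
Absolute Maximum Ratings
Supply voltage, VDD ............ 6.0 V
Output current ................. 150 mA
"""

def extract_vdd_max(text: str):
    # Grab the first number followed by 'V' on the supply-voltage line.
    m = re.search(r"Supply voltage.*?([\d.]+)\s*V", text)
    return float(m.group(1)) if m else None

print(extract_vdd_max(SNIPPET))  # 6.0
```

Getting from a one-off regex like this to reliable SPICE model generation is the actual hard problem, of course.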
Yes, us too!
We are working on all of this right now and have a bunch more announcements over the coming weeks and months.
Stay tuned!
Nice stuff. But hard to find good application in my environment for it. I mostly re-work old designs or use reference designs with particular chips (that the company has in stock at the moment). Maybe somebody could greatly benefit without the limitations I have at work.
This phones home, so I cannot use this as per my company's regulations.
Github auth seems broken for me on chrome mac, anyone else ? Here's the error on the login page : You are already signed in, please try signing out and signing in again, or contact us for help. Sometimes I get sent to a profile page, but trying to create a project sends me back to the login page.
Thanks for the report. I’ll look into it tomorrow.
Same with login with Google
We are witnessing the arrival of the technological singularity in slow motion.
It is becoming possible for AI systems to rapidly design hardware and software systems from the requirements of individuals who do not necessarily understand the subject matter.
I give tech 10 years before most of it is accomplished by AI and most people in tech will be forced out. Like DC elevators, some will remain, but not like it was.
i give AI at least another 500 years until the I in AI actually stands for intelligence.
See what I did there? Everyone can make outrageous claims without backing them up whatsoever :)
Nope. You added nothing of value. You can be a fool and deny numerous forms of narrow AI while AGI is a flying car distraction that doesn't matter. It is your error conflating the two.
Yes. My claim is not TRULY outrageous /s
Ah, yes, "confident" and "wrong" is exactly the combination I need when designing with lethal voltages. /s
Flux is cool, but I don't know who the customer is. I feel like, as a hobbyist maker, I might be it and just don't know it?
Our Mission is to take the "hard" out of hardware so anyone, from makers to businesses, can easily turn ideas into products.
So def give it a shot!
Excellent product! I'm curious how you are communicating data from the project to ChatGPT and handling memory. Have you found it handles new external data well without first modifying its format or structure, or have you had any memory issues?
I wanted to give it a try, so I went to sign-up with Github but the process failed. So then I signed-out of Github and tried to login again but Flux loaded someone else's profile! Had to de-authorize Flux from my Github.
Hi,
I'm on call for the team this week.
We'd love to get more information here as we take anything around privacy and authentication very seriously. I'd like to get usernames of profiles involved and rule out being redirected back to a page you were visiting before signing in.
Would you be open to discussing further? You can reach me at robert.herbst@flux.ai. Being able to discuss with you directly would be really helpful.
Alternatively we actively monitor and respond to bug reports filed here: https://feedback.flux.ai/bugreports
neodypsis reached out and we got to the bottom of this.
When a user visits a Flux project without being signed in we create a temporary anonymous user record for them so that they can interact with the document. These user records are populated with human-like names (which is partially to blame for the confusion here. We'll fix that.)
When a user signs up their anonymous record is updated with the details they provide but the record id remains the same so we can keep the user's interaction history.
In this case when the Github sign up failed the process stopped half way. The anonymous account was never updated with the username neodypsis provided but appeared to be someone else's account (a human name followed by several numbers).
We've verified that the account was indeed linked to neodypsis' Github profile and no-one else had ever had access to it.
We've started some work to improve the OAuth sign up process.
This was a good talk on routing https://theamphour.com/626-intelligent-routing-with-sergiy-n...
Is there anything like this for ASIC design / Verilog?
RapidGPT for FPGAs maybe. https://rapidsilicon.com/rapidgpt/
Its AI said it added components to the schematic, but there is nothing there?
Amazing
How do I exclude my account?