Amazon CodeWhisperer, Free for Individual Use, Is Now Generally Available
aws.amazon.com

I have used both it and Copilot, and it is a bit behind Copilot.
First, Copilot supports a lot more languages (a big part of the utility of such tools is that they let you write code in an unfamiliar language much more quickly).
Second, CodeWhisperer fails more often with incorrect suggestions, and on non-trivial things it often tends to go line by line.
Good enough for me:
> Today, we’re excited to announce the general availability of Amazon CodeWhisperer for Python, Java, JavaScript, TypeScript, and C#—plus ten new languages, including Go, Kotlin, Rust, PHP, and SQL. CodeWhisperer can be accessed from IDEs such as VS Code, IntelliJ IDEA, AWS Cloud9, and many more via the AWS Toolkit IDE extensions.
I was impressed that Copilot even works well on languages not advertised as supported.
Isn't Copilot GPT-3.5 now (before, I think, it was GPT-3)? It performs well even on languages it doesn't know. I write weird languages for fun, and in VS Code it quickly starts giving elaborate slabs of code for languages it only knows from my personal projects. I generally don't use regular syntax (we have enough of those); my last abomination was a Unicode-character APL-alike, and it picked it up pretty fast.
They announced Copilot X as using GPT-4, but it's unclear to me whether it's just the new features (the chat, etc.) or the code completion that will move to GPT-4 as well.
If it isn't, the completions presumably stay on what Copilot has used previously: OpenAI Codex (which is based on GPT-3).
Source: https://github.blog/2023-03-22-github-copilot-x-the-ai-power...
It goes beyond languages. Yesterday I was editing a markdown file that was to become a post about movies on a static site. After I entered the title of a Taiwanese movie from the eighties, I jumped to the frontmatter to add some metadata about it. It autocompleted the director even before I typed the first letter of his name.
It was so unexpected for me that I had to pause for a second to process what happened.
LLMs are "Autocomplete All The Things".
We know each other so well we finish each other’s sandwiches.
I know someone who uses it with SAS, which isn't well known outside the analytics space. (Fun fact, SAS started out on punch cards.)
Just tried CodeWhisperer out. The line by line output is driving me crazy.
Definitely disappointing compared to ChatGPT based code creation. I love describing what I want very briefly and getting a nice block of code to start tweaking.
I wish there was an easy way to benchmark these tools and revisit them when they pass a threshold of competence.
> Just tried CodeWhisperer out. The line by line output is driving me crazy.
That's not necessarily the case; it can generate whole functions and even multiple functions.
Today I made a class called "DynamoUtils" and it suggested 2 full methods.
Third, Copilot supports vim.
This is actually the best selling point for me. With Sourcegraph joining the party, not supporting Vim is definitely minus points. And rightfully so.
FWIW we are actively working on support for Neovim.
Why even compare them when one is free and the other is not?
Because some of us are doing this for a living and happy to pay $10/month if it is better than the free option.
This will impact copilot pricing. I will be keeping my copilot subscription (since it is pretty cheap), but I hope and expect there to be some good competition here.
With copilot being embedded in all office software in the near future, MS may as well make GH copilot free. Interesting times!
I'm all for them keeping the pricing as-is and competing on quality.
Copilot for business and CodeWhisperer Professional are the same price ($20/user/month).
While CodeWhisperer offers a free tier, which may help individuals or pressure Copilot to lower personal-account prices, AWS hasn't priced this very competitively for enterprise while their tool still performs worse.
I wouldn’t take “performing worse” as a given without in-depth testing. I could see enterprises making calls like that based on things like the security checks or attribution for OSS over support for more languages.
I'm only speaking from personal experience, mostly with TypeScript for both, but CodeWhisperer feels like it has more lag before offering a completion, and the completions it offers are often worse. It duplicates lines that already exist and adds inappropriate indentation; where Copilot very helpfully finishes your line and stops, CodeWhisperer tries to add three more lines entirely, and you reject it far more often. It's not worthless at this current stage, but it's not something to strongly praise either.
The security checks and OSS attributions feel very much like what "enterprise" software does when they know they can't compete, they tick boxes instead.
As a hobbyist programmer trying to learn, but with limited free time to code, I really want a copilot free tier for personal use. I really hope this happens.
Contribute to open source. They give free licenses to contributors to large enough GitHub projects.
Office costs money though, vs code doesn't.
Did they announce Office Copilot pricing already?
I assumed it would be included in office 365 but I don't think they explicitly said it.
I would guess that after a few years if everyone starts depending on this technology they would increase the price.
The next leap in developer productivity won't arrive from reducing the drudgery of typing. That was never the problem, nor the source of bugs, nor a damper on productivity. So long as we still evaluate code in our heads (or pay the expensive compile-run-observe cost), we're generations behind what architecture has achieved with computers. So long as I can't evaluate the consequences of any line of code as I write it, dumping lines in industrial quantities is searching under the streetlight.
Reminds me of other tools from the past that "allow development without a developer", with fancy drag-and-drop UIs that look like circuit diagrams. In trying one, you very quickly find out that you still need a developer, because the easy happy path the business analyst thinks of is like 1-10% of what we do. 90% of our job is the plethora of edge cases and sad paths. The retrying. The when-not-to-retry case within the retry case.
Do these language-model bots help a little? Sure! But my worry about being replaced is currently sitting at about 3 out of 100. I expect to still have a job right up until we have AGIs, and quite probably for long after, as not everyone will be able to afford them. That is assuming we have any meaningful control over them.
I think this is more of a developer productivity tool than a replacement. Like an IDE.
We are still going to evaluate code in our heads. If anything, we need to enhance our capacity to do so. That is where AI can assist. A simple overview or graphical representation of a codebase will be a good start.
UML is dead! Long live UML!
The future of coding is LLMs coding, debugging, and refactoring in a loop. Our job will be mostly design, and intervening occasionally.
I'd rather be the detective of my own murder than another's. I already struggle to figure out people's thoughts from the strictures of whatever programming language they express them in. Programming languages aren't symbols of thought, nor do they enjoy the ubiquity of, for example, math notation, unfortunately.
Sorta like how in the 90s UML & all the new modeling tools were going to replace the need for actual coding!
New programmers who did not have the chance to learn "properly" might not be capable of "intervening".
That's probably what every generation of "older" programmers has said: new programmers who did not do assembly will never be able to code etc.
> new programmers who did not do assembly will never be able to code etc
I see the results from programmers who never learned assembly.
Just like repairing and building cars from the ground up makes me a better driver. For example, the clutch in my car lasts a lot longer than it does for other drivers.
Then again, there is no clutch in an EV
It doesn't have one because it doesn't need one, not because it somehow built an abstraction around it. That's not the case with modern computers, where everything is built on top of everything else.
I'm sure batteries and electric motors have their own quirks. For example, battery performance is dependent on temperature.
You are arguing against your own point. Everything has its own quirks; nobody has it uniformly easy or tough.
1) False equivalence.
2) I am an old fart and have never said anything even remotely approaching "who did not do assembly will never be able to code". One can learn programming using any language. However, some experience with languages at multiple levels definitely does not hurt.
In order to evaluate generated code you have to know how to program.
If you look at the mountains of crap that have been written in JS, for instance, those old programmers have been proven right.
Having no knowledge about assembly doesn’t stop me intervening with my JavaScript apps :)
Intervening can be done via LLM in some way I reckon
That's not really a fair comparison here. A more fitting metaphor would be whether or not you could intervene in assembly code without knowing assembly.
Who said you need to know assembly?
>"Intervening can be done via LLM in some way I reckon"
This inspires much confidence
Thank you, you're the first person in this whole darn thread who seems to understand this. Why does a human need a model of the program in their head when the LLM will be able to fire off 1500 test/debug/fix loops, wait for them all to resolve, then vote on the best answer?
For years we've been saying "computer time is cheaper than developer time."
Well, that's about to come back to bite us, in a big way.
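To make that concrete, here is a minimal sketch of what such a loop might look like, in Python; llm_generate and run_tests are hypothetical stand-ins for a model call and a sandboxed test runner, not any real API:

    import collections

    def generate_until_green(prompt, attempts=5, fix_rounds=3):
        # Hypothetical generate/test/fix loop with majority voting.
        candidates = []
        for _ in range(attempts):
            code = llm_generate(prompt)               # hypothetical model call
            for _ in range(fix_rounds):
                ok, errors = run_tests(code)          # hypothetical sandboxed test run
                if ok:
                    candidates.append(code)
                    break
                code = llm_generate(prompt, feedback=errors)  # ask the model for a fix
        if not candidates:
            raise RuntimeError("no candidate passed the tests")
        # "Vote" by returning the candidate that was produced most often.
        return collections.Counter(candidates).most_common(1)[0][0]

The trade is exactly the one above: every iteration burns compute instead of developer attention.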
Test/debug/fix against what? Writing code is fundamentally not about tests or debugging, it's about formalising informal requirements into hard constraints. To do that, you need to understand business, human, and technical contexts. Without that understanding, you get over engineering, Google Cloud Console, and poor code lacking mechanical sympathy respectively. What's more, often those contexts have competing/incompatible requirements that you need to either evaluate using your idea of your team's values, or escalate, being aware of your own limitations.
None of those problems are amenable to modern LLMs. The moment you try to be formal enough to be unambiguous, you start writing code.
Yes. Like it or not, the days of "typing code" are coming to an end. In fact, it may be cheaper for companies to just nuke all existing software and start over.
Speaking as someone who knows architects working with computers, architecture hasn’t achieved as much with computers as you think.
Architecture has a long history, of course. But even if we start from when drafting became standard practice, recent advances like 3D modeling (due to computers) and analyses (satisfying/resolving building-code constraints, structural forces, etc.) are quantum leaps. In programming, perhaps type systems (static) and debuggers (post-production) give a similar feeling, but not quite. These two aside, the concerns of a C programmer today largely overlap with those of Dennis Ritchie and Brian Kernighan.
It goes beyond computing. The way that humans simplify systems is to treat components as black boxes, even though lots of internal details are actually coupled together. When you see a codebase that does this, you think "ewww separation of concerns".
But the real world violates this all the time. You want to buy a car. Some company you've never heard of in China makes the chips that detect whether or not your windshield wiper fluid reservoir has fluid. A shipment to the car manufacturer is ready to go out. But, there are no shipping containers. Until the windshield wiper sensor chips arrive, the car factory can't make any cars, and don't have room to unpack the shipping containers with unneeded parts that are piled up outside. So there is no container that can go back to China to bring the chips to the factory. While all that is worked out, SV venture capitalists print some money to give to a used car startup, making it super easy to get the best price on your used car. With no new cars available and flashy discounts to get the market kickstarted, the used car market shoots up, meaning that even though you want a new $60,000 electric car, all you can do is buy a used 1988 Yugo for $150,000. You walk to work, even though you have the money for the car you want.
If it's software, this is what we call a pageable event and the postmortem whines about "separation of concerns". But in the real world... well, we don't have those. We LOVE thinking we do, but when shit blows up, it's clear that we don't. So are we really surprised that software works the same way? It's how the Universe works, not bad architecture. The Universe has terrible architecture. Adjust some of those constants and try again!
Are there IDEs that can display compile-run-observe results in real time? A feature where you can set the inputs for a certain function and view the result of each statement in-line as you edit the function?
It's a normal way to develop in Clojure. The primitive, no-IDE-support-needed trick is to just define variables with the names of your function inputs, as in: https://blog.michielborkent.nl/inline-def-debugging.html
and then you can just evaluate expressions within the function. The fancy way with editor support is: https://github.com/vvvvalvalval/scope-capture-nrepl
You make snapshots of the local variables at any point, and later evaluate code in the context of that snapshot. So you do some action in your program that results in that function being called, and it'll save the input; you select that snapshot, and now you evaluate in the context of those function arguments as you edit and eval expressions in the function. And while Clojure supports interactive development at a level beyond other mainstream languages, Smalltalk and Common Lisp support it on yet another level, for example: https://malisper.me/category/debugging-common-lisp/
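For non-Lispers, here's a rough Python analog of the inline-def trick; capture_args is a made-up name, and this is a debug-only sketch, not a library API:

    import functools

    def capture_args(fn):
        # Snapshot the most recent call's arguments so you can replay
        # pieces of the function body at the REPL afterwards.
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            wrapper.last_call = (args, kwargs)
            return fn(*args, **kwargs)
        wrapper.last_call = None
        return wrapper

    @capture_args
    def price_order(order, discount=0.0):
        return sum(item["price"] for item in order) * (1 - discount)

    # After the app calls price_order once, at the REPL:
    #   args, kwargs = price_order.last_call
    # and you can evaluate expressions from the body against real inputs.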
There's some study where Smalltalk came out as the most productive language. I don't know whether it's more productive, but that kind of interactive development where you build up your program evaluating it the whole time, without ever restarting, is a lot of fun. Why it went out of style, I don't know.
> you build up your program evaluating it the whole time, without ever restarting
Maybe that description is incomplete.
Maybe there's a stack of mundane activities that are needed for that style to be effective.
----
"Within each project, a set of changes you make to class descriptions is maintained. … Using a browser view of this set of changes, you can find out what you have been doing. Also, you can use the set of changes to create an external file containing descriptions of the modifications you have made to the system so that you can share your work with other users.
…
The storage of changes in the Smalltalk-80 system takes two forms: an internal form as a set of changes (actually a set of objects describing changes), and an external form as a file on which your actions are logged while you are working (in the form of executable expressions or expressions that can be filed into a system). … All the information stored in the internal change set is also written onto the changes file."
1984 "Smalltalk-80 The Interactive Programming Environment", page 46
https://rmod-files.lille.inria.fr/FreeBooks/TheInteractivePr...
----
"At the outset of a project involving two or more programmers: Do assign a member of the team to be the version manager. … The responsibilities of the version manager consist of collecting and cataloging code files submitted by all members of the team, periodically building a new system image incorporating all submitted code files, and releasing the image for use by the team. The version manager stores the current release and all code files for that release in a central place, allowing team members read access, and disallowing write access for anyone except the version manager." (page 500)
1984 "Smalltalk-80 The Interactive Programming Environment"
> There's some study where Smalltalk came out as the most productive language. I don't know whether it's more productive, but that kind of interactive development where you build up your program evaluating it the whole time, without ever restarting, is a lot of fun.
I think it's this paper: https://www.ifpug.org/wp-content/uploads/2017/04/IYSM.-Thirt...
Which seems to be a proprietary venture without any transparency into the kinds of tasks that were undertaken or the software development tools chosen. For example, what were the comparable tasks undertaken with "HTML" and "Machine language"?
Presumably the differences between "GW Basic", "Basic (interpreted)", "Quick Basic", "Visual Basic" ("Excel" Visual Basic for Applications?) follow from differences between the software development tools provided with the different language implementations.
So shouldn't we expect wildly different results between Java + plain text editor and Java + IntelliJ IDEA?
But there's only some kind-of generic place-holder "Java".
Light Table, which I presume to be defunct, comes to mind. (http://lighttable.com/#features)
But if you're willing to do without the "as you edit" requirement, then what you're left with is a plain old breakpoint debugger. Certainly, there are many IDEs that have those builtin.
If you use JavaScript or Typescript, check out Wallaby and Quokka.
My experience with this type of tool is it does help with some types of very repetitive code (wordy idioms, test code, etc.) or with using unfamiliar libraries. But yeah it's limited in other contexts.
Do they train on your code that they read for context? Do they retain the code snippets you generate? I can't find any mention of privacy in their copy.
It’s opt-out for the free version: https://aws.amazon.com/codewhisperer/faqs/
Hard to prove whether they use it anyway.
It's Amazon so assume that they do unless it's extremely clearly stated that they don't. And even then...
AWS is very stringent when it comes to user privacy.
AWS is probably very stringent on user privacy on the services that they specifically enter into HIPAA BAAs covering or which are associated with other data-security-compliance-related agreements or certifications.
Amazon as a whole has quite a history of using business data of people they are selling services to for their own purposes, and I wouldn’t put it past them with any AWS services not covered by the compliance agreements/certifications.
You certainly can’t rule out a large company making dumb moves but I have gotten the impression that they’re very hesitant to do anything which would make companies stop trusting them with private data, and the controls they have like allowing you to control the encryption policy for Code Whisperer data support that. It’d have to be worth a lot to make customers question whether it’s safe to use S3 (which uses the same mechanism).
Part of releasing AWS products to customers involves getting cleared by AppSec plus other compliance teams. The products are supposed to be ready months in advance before the AppSec and other teams start working on them.
For whatever reason I got flagged :')
Amazon and AWS have completely separate privacy/security teams and different ways of approaching it. _Every_ AWS service treats user data like radioactive material. If you're an AWS service and you're found to have a way for AWS employees to get access to customer data, that's a fast track for you and your managers to get an invite to a meeting with the CEO to explain how it happened, how you're going to fix it, and what you're going to do to make sure it never happens again. That's not an exaggeration, they take it very seriously.
Sounds like you trust the word of a big corporate entity with many arms whose primary purpose is to generate returns for shareholders.
AWS is very different from Amazon in that regard.
I've used GCP and AWS for about equal time, and Google was the one with several worrisome moments of overreach into customers' accounts. Meanwhile, AWS actually lets you view and adjust the role and policy given to customer support on your account.
Fully aware AWS and Amazon are different, but my point still stands, IMO. Not sure why some people have so much trust in massive corporations, which ultimately are all about making money for shareholders. For some people it's like supporting a sports team. My trust in them is one of caution and low expectations.
I said that because I worked at AWS, because I have seen the internal processes in action, and because I am aware that AWS has far more to lose by looking at customer data than they can gain.
I understand that as well, but it's not always enough of a deterrent for some companies. And things change. Caution is a good thing, IMO: e.g., don't lock yourself in, so that you're always capable of moving your solutions.
It's opt out.
"Hey Alexa, disable all microphones." "Sure." "Thanks Alexa." "You're welcome."...
The hardware mute on Amazon Alexa devices is actually supposed to be a real mute, and on the Echo 5 we have, there's a hardware shutter over the camera.
Once the hardware mute is engaged, I've not had the Alexa reply.
Don't get me wrong, there's plenty of reasons to bash Amazon. I just don't count Alexa and privacy with mute among them.
Fair point, though I will say unless I look at the circuit myself and confirm the hardware mute switch is cutting an actual signal to/from the microphone, I won't fully trust it. Whether that matters is up for debate - I carry around a Google Pixel phone all day, and while I don't use any of the voice assistants, I also don't audit everything closely enough to guarantee nothing is spying on me randomly.
As for my joke in general - I guess my point, if I have to make one beyond making a joke, is that I don't really trust any company to do what they claim in all circumstances. Even if they have every intention of doing so, some bug or bad actor could compromise the intent - not saying it's identical, but look at the recent issue with Tesla camera pictures being shared. So if you're working on some mission critical or top secret code, I wouldn't trust _anything_ to be running or looking at it - not Copilot, CodeWhisperer, etc. etc.
Run Wireshark on your network. I know this was probably tongue-in-cheek, and not undeserved, but you can always prove this.
It was indeed a joke, but yeah, that's valid (and was how the Eufy issue was discovered IIRC).
Exactly this.
The first thought I had was "they made it free to collect data, since they don't own GitHub or similar".
Here is someone not affiliated with Amazon doing a live demo of CodeWhisperer: https://www.youtube.com/watch?v=E0jCIPaIaiA
Just a thought that I have not fleshed out and can even see some problems with, but: will we arrive at a point or will it even be desirable to get to a point where the text in your source "code" just remains the instruction you've given to the AI? In other words, Amazon's example on the linked page is:
> /*Create a lambda function that stores the body of the SQS message into a hash key of a DynamoDB table.
Now, obviously that is not valid Java syntax and javac will fail on that, but could/would it be possible to just build an intermediate tool that'll expand this into Java (or whatever other language) so that you don't need to even see the expanded code in your editor, like the same way you don't need to see bytecode?
I get that practically, right now, that would be ill-advised, since the AI may not be reliable enough and there are probably more cases than not where you need to tweak or add some logic specific to your domain, etc. But still, theoretically, is that where we are heading, i.e. a world in which even what are now considered high-level languages get shoved further down and are considered internal/low-level details?
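For a sense of what that prompt expands to, here is roughly the shape of the generated code; a Python/boto3 sketch rather than Java, with an assumed table name and key schema:

    import boto3

    table = boto3.resource("dynamodb").Table("Messages")  # assumed table name

    def handler(event, context):
        # An SQS-triggered Lambda receives its batch under event["Records"].
        for record in event["Records"]:
            table.put_item(Item={
                "id": record["messageId"],  # hash key (assumed schema)
                "body": record["body"],     # the SQS message body
            })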
> Now, obviously that is not valid Java syntax and javac will fail on that, but could/would it be possible to just build an intermediate tool that'll expand this into Java (or whatever other language) so that you don't need to even see the expanded code in your editor, like the same way you don't need to see bytecode?
One step before this (AI as a preprocessor that generates source code, which is then validated by tests and committed without review) is, I think, possible.
Cutting-edge LLM apps utilize multiple LLMs to perform validation, task decomposition, etc. It's not a stretch that a future application could take your pseudocode/spec, maybe ask you some clarifying questions, generate a bunch of code and test cases, and maybe even launch a beta stage and prompt you to validate it.
As others have mentioned, LLMs are nondeterministic and can do the wrong thing on a given run. This is in contrast to a traditional program that is either buggy or bug free. OTOH another LLM can be trained to validate, and to debug.
There’s a lot of work to do before LLM apps are considered reliable enough to do their job without intense supervision.
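A toy sketch of that preprocessor idea: the prompt stays in the source as the single source of truth, and expansion happens at build time. The /*ai: ...*/ marker and llm_complete are invented for illustration:

    import re

    PROMPT = re.compile(r"/\*ai:\s*(.+?)\s*\*/", re.S)

    def expand(source: str) -> str:
        # Replace each /*ai: ...*/ prompt with generated code, keeping the
        # prompt alongside as a comment so it can be regenerated later.
        def gen(match):
            prompt = match.group(1)
            code = llm_complete(prompt)  # hypothetical model call
            return f"// ai: {prompt}\n{code}"
        return PROMPT.sub(gen, source)

In practice you'd gate the output on the generated test suite before anything gets committed, as the parent suggests.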
Yes, that was the original promise of compilers.
COBOL was designed for normal business people to use, remember?
You'll just have to program in an AI-understandable language; I'm sure there will be lots of quirks and tricks, similar to the languages of today.
Right, but what do you think about the point ch4s3 made below, namely that AI is non-deterministic? Isn't that a core difference? The COBOL compiler produces the same output for the same input, i.e. the compiler acts as a pure function.
There's nothing requiring non-determinism in these models; it's just there as a means to increase variety. There are obviously scenarios where this makes less sense.
Kinda. There are GPU kernels that are sped up by being non-deterministic, so you also gain performance from non-determinism.
Doesn't 0 temp give you deterministic outputs tho?
Also, you only need to translate/expand once (or a few times: add tests, pick the best benchmarked solution).
Where this could be useful is in handling updates of packages and APIs by itself. If you integrate only by prompt/words, the AI can generate the appropriate latest library integration that happens to work with your system, or whatever.
Even temp=0 isn't fully deterministic for some reason, most likely floating point non-associativity and related issues.
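The non-associativity point is easy to demonstrate in Python:

    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c)                  # 0.6000000000000001
    print(a + (b + c))                  # 0.6
    print((a + b) + c == a + (b + c))   # False

When a GPU sums logits in whatever order its threads happen to finish, the last bits can wobble like this, and a near-tie in the argmax can flip, so even greedy (temp=0) decoding can differ between runs.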
Is that supposed to be a good thing when you're generating code?
Accepting varied, non-deterministic input: great. The same input generating different code each time: not the "feature" you'd think it is.
> will we arrive at a point or will it even be desirable to get to a point where the text in your source "code" just remains the instruction you've given to the AI
These systems are non-deterministic by nature, so I doubt it unless something fundamentally changes. Moreover you'd have to be super specific to capture the business logic to the point that you're basically writing code in a high level dynamic language anyway.
Giving a task to a team of programmers is similarly non-deterministic by nature. Otherwise we'd not see so many posts about budgets, planning, meetings, sprints, and security bugs :)
Yes, but the execution of the source is (modulo network stuff) deterministic, which is my point.
Yes, but the source code is not non-deterministic, which is what the GP was talking about.
Right, that makes sense. You can't very well have a system where the AI instructions produce a different underlying program each time lol.
I don’t think that’s necessarily a problem. Technically every time you update gcc your compiler might produce a different underlying program for your source code.
The bigger problem is that LLMs are slow and expensive. Even in the future after many improvements, it makes more sense to have an LLM write a program once ever, rather than write a program on every compile or every execution.
> Technically every time you update gcc your compiler might produce a different underlying program for your source code.
Individual versions are deterministic, though. Two identical prompts to an LLM at the same time can give drastically different results, because the responses are probabilistic. You can't assemble complicated systems that way and expect them to behave consistently.
The issue you’re describing is more to do with correctness and performance (two critical elements of a good compiler), not nondeterminism.
If a natural language compiler can output correct performant code, nondeterminism shouldn’t matter.
For example, take a script that randomly invokes either gcc or clang and maybe randomly sets the optimization level. Multiple invocations will produce vastly different output, but we can be confident the output is correct and, to some degree, performant.
No, my point is that nondeterminism affects correctness. A random script that invokes different compilers is a contrived example; no one would ever build a system that way, as it is totally undesirable. Moreover, I'm not sure how we could determine the correctness of a system generated by an LLM without auditing the output and certifying each run. Who would ever want to work that way? This just creates problems that don't need to exist.
> Who would ever want to work that way? This just creates problems that don't need to exist.
That depends on what you want.
In the first place, the problem of compiling a natural language spec to code is obviously somewhere from undefined to Turing complete (depending on formulation). But if the compiler usually outputs some application with most of what the spec required, this compiler would be intensely useful for e.g. rapid prototyping.
Then the question is whether we can make an LLM based app that compiles natural language and gets you most of the way to the prototype you were building (or even better - asks clarifying questions to help refine your spec).
This isn’t that far fetched with current technology.
I can totally see the case for prototyping, it makes some sense. I just think that by the time you are specifying something so clearly that the results are correct you may well be practically programming in a super high level dynamic language.
> could/would it be possible to just build an intermediate tool that'll expand this into Java
Yes.
But... it'll expand it based on the probability of what you want looking like other things it's been trained on. If you want the obvious use case then it'll be magical. Just describe the code and it'll work. But as soon as you want anything slightly less than typical you'll need to start 'prompt engineering' to refine in greater and greater detail, possibly until you've actually put in more effort than it'd take to just write the code.
For anything that's even further outside of the training data it won't work but it might look like it does. In the short term that's going to trip a lot of people up.
The worst part will be when non-developers start to use it though. "Make me a web form that takes a name, email address, and ZIP code and saves them to Airtable" will probably work eventually ... but with no validation, no error handling, no security, no styling, no cross-browser testing... because the author didn't know to ask for those things in their prompt. AI derived apps are going to suck.
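For instance, here's the kind of thing that Airtable prompt plausibly yields; a hypothetical Python/Flask sketch against the Airtable REST API, with an invented base/table:

    import os
    import requests
    from flask import Flask, request

    app = Flask(__name__)
    AIRTABLE_URL = "https://api.airtable.com/v0/appXXXXXXXX/Contacts"  # invented base/table

    @app.route("/signup", methods=["POST"])
    def signup():
        # Exactly what was asked for and nothing more: no validation of the
        # email or ZIP, no error handling, no rate limiting, no CSRF.
        requests.post(
            AIRTABLE_URL,
            headers={"Authorization": f"Bearer {os.environ['AIRTABLE_API_KEY']}"},
            json={"fields": {
                "Name": request.form["name"],
                "Email": request.form["email"],
                "ZIP": request.form["zip"],
            }},
        )
        return "ok"

It runs, it demos well, and every one of the missing concerns is invisible to the person who wrote the prompt.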
In fact you can do that, and I've already built a tool for my own use that does something very similar. It works surprisingly well, and the specs (prompts) you write get checked into VCS and act as a form of literate programming that can help explain the resulting code better than comments do.
It's funny, I was actually just pondering what to do with it when I opened HN and came across your comment. I was thinking of improving it some more and then selling it for a low-ish price. One thing that'd really help though, is a more widely accessible GPT-4 API.
You can try the ai_fn decorator in Marvin: https://github.com/PrefectHQ/marvin
No, because a programmer's job is to build stuff to spec, and "just do what I really mean" is the opposite of that.
The best complete prompt we have for that is the source code, so I think not.
Great to see them giving attribution and reference links to open source. I don't think Copilot does this at all.
"To help you code responsibly, CodeWhisperer filters out code suggestions that might be considered biased or unfair, and it’s the only coding companion that can filter or flag code suggestions that may resemble particular open-source training data."
Copilot has a setting for this, and they announced attribution a while ago: https://github.blog/2022-11-01-preview-referencing-public-co...
It would be interesting to see how AWS actually does their attribution: how do they know which open-source code was published in which public repo?
Still doesn't clearly mention the license
GitHub Copilot has the same functionality; it checks on the fly that the generated code isn't reproduced verbatim from training data.
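Conceptually the check is simple; here's a toy sketch of verbatim-match flagging (my own illustration, not how either vendor actually implements it; at scale you'd presumably use hashes or suffix arrays over the training corpus):

    def flags_verbatim(suggestion: str, corpus_ngrams: set, n: int = 12) -> bool:
        # Flag the suggestion if any run of n consecutive tokens also
        # appears verbatim in the (pre-indexed) training corpus.
        tokens = suggestion.split()
        return any(
            " ".join(tokens[i:i + n]) in corpus_ngrams
            for i in range(len(tokens) - n + 1)
        )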
Anybody having any luck with this for Rust? Tried to prompt it to give me an MQTT hello world, and instead it just keeps suggesting more and more commented-out imports:
// Send string via mqtt
// use async_std::task;
// use async_std::prelude::*;
// use async_std::net::TcpStream;
// use async_std::io::prelude::*;
// use async_std::io;
// use async_std::sync::Mutex;
// use async_std::sync::Arc;
Possibly confused by having multiple ways to implement this solution? Seems like languages that have many ways to accomplish something, and being trained on that data, might have issues providing a proper solution. I'm fairly sure a bug report would help in this case (if you want to help them, you might just be experimenting; nothing wrong with a skeptical look)
I find constant copilot suggestions kind of annoying but I really enjoy keeping a GPT-4 tab open as a pair programmer. The 25 request limit per 3 hours has been fitting my workflow just fine since I mostly use it when I'm stuck on a problem just to get ideas or when I'm working on an unfamiliar language/framework.
I’m actually really interested to try this. I’ve been putting off paying for Copilot because GitHub does not make it convenient to pay for their services without a credit card and I do not have a large enough open source project to qualify for a free license.
I also haven’t used these tools at all so if CodeWhisperer is a little “dumber” than copilot, I doubt I will even notice.
Prepaid Visa/MC? Hard to imagine life without Copilot now that I have been using it for a few weeks.
The era of code generators is likely to usher in an era of legal work surrounding them.
I was just thinking about this before reading the announcement. Part of our work is in aerospace, hardware and software both being part of that. All of it goes through layers upon layers of design, testing, verification, and qualification for flight.
In my mind I saw this scenario where something happens and it ends-up in the courts. And then, in the process of ripping the code apart during the lawsuit, we come to a comment that changes it all. Something like this:
// Used Amazon CodeWhisperer to generate the framework of this state machine.
// Modified as needed. See comments.
That's when the courtroom goes quiet and one side thinks "Oh, shit!" What does the jury think?
They are not experts. All they heard is you just used AI to write part of the code for this device that may have been responsible for a horrific accident. Are their minds, at that point, primed for the prosecution to grab onto that and build it up to such a level that the jury becomes convinced a guilty verdict is warranted?
Don't know.
Does this mean we have to be very careful about using these tools, even if the code works? Does this mean we have to ban the use of these tools out of concerns for legal liability?
Personal example:
A year or so ago I wrote a CRC calculation program in ARM assembler. It could calculate anything from CRC-8 to CRC-32. This was needed because we were dealing with critical high-speed communications and there was a finite real-time window to compute the CRC checksum. The code was optimized using every trick in the book, from decades of doing such work. Fast, accurate, did exactly what it was supposed to do. In production. Working just fine.
I was curious. A couple of weeks ago I asked ChatGPT to write a CRC-32 calculation routine given some constraints (buffer size, polynomial, etc.). It took a few seconds for it to generate the code. I ran it through some tests. It seemed to work just fine.
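For reference, the routine in question fits in a few lines; here's a bit-at-a-time CRC-32 sketch in Python (reflected algorithm, polynomial 0xEDB88320, matching zlib.crc32; obviously nothing like the hand-optimized assembler version):

    def crc32(data: bytes, poly: int = 0xEDB88320) -> int:
        # Bitwise CRC-32: slow, but small enough to audit by eye.
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (poly if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    assert crc32(b"123456789") == 0xCBF43926  # standard CRC-32 check value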
That's when the question first occurred to me: Would it expose us to liability if that code were to be used in our system? I don't know. I have a feeling it would be unwise to use any of it at all.
Wouldn't it be funny, interesting and perhaps even tragic if we had to have "100% organically-coded" disclaimers on our work in the future?
I've been trying to hold on to using Sublime Text because I really appreciate the super clean interface which helps me stay productive, but I think I'm probably going to have to either go back to VSCode or possibly JetBrains Fleet. Every time I learn a new technology there seems to be an official VSCode Plugin for it that gives you some superpowers. Sometimes they'll get some sort of port to Sublime but I feel like it's falling too far behind now. It does make me pretty sad though because I love it. I think it'll be a long time til anything pries Sublime Merge from my fingers though.
How do people find it compared to copilot?
There are VS Code and JetBrains extensions, it's part of the AWS Toolkit (can be a bit hard to find). https://docs.aws.amazon.com/codewhisperer/latest/userguide/s...
Sorry if my question was misleading - I mean how good/bad do they find it compared to copilot. (Not how do they search for it).
You can search on Blind: it's fairly far behind, and the gap will probably grow over time.
It was just announced, so any opinions on this would be uninformed.
Nah it's been available in VS Code for a year or two in "beta".
Yes, but this is a new release including a bunch more languages, and thus I assume better functionality.
Used it for weeks internally; I finally gave up after more than once feeling that the day's use was a net productivity loss, even when working with internal Amazon packages (where it should have had a training advantage over Copilot). Terrible UX, and Copilot really has just miles more intelligent suggestions.
Tried it for a bit this morning with the "newest" release and didn't immediately observe any improvement, though this is far from objective of course.
That's really helpful, thanks for sharing!
Yes! Finally I have more time to spend in meetings!
New meme: "There's an AI for that."
Could not find a vim plugin though; pass for now.
Maybe I'm missing something, but isn't it supposed to output multiline suggestions as in the example for everyone? I can't find any option in vscode and just get line by line results (any supported language).
Working for me in Intellij. I write a method name and it guesses from the name of the method what the code should be.
Free for individual use? Sold.
If it’s “free” it means you are the product.
Amazon's totally gonna read my code to show me ads, that must be it.
I expect I'll start to see a lot of "intro to programming" books pushed to my recommendation list, based on much of the code I deal with ;)
Would love to see how it compares to Copilot in terms of accuracy
Since Amazon's getting into this game, what's the chance that they'll use this as an avenue for intellectual property discovery?
Extremely small considering how bad that would be for them.
They have already alienated most grocery and other online retailers from AWS. It wouldn't make sense for them to do this to others.
Amazon doesn't have a lack-of-IP problem; they, like all large companies right now, can't turn ideas into products.
Free for individual use; curious about the pricing for enterprise use. It feels like the next Alexa to me in terms of burning money.
I doubt it's anywhere near the billions of dollars in cost that Alexa devices were for Amazon. The only cost here is compute power and engineer salaries; there's no hardware to design and build. I expect most of the compute costs can be passed back to customers who pay for the system. Really it's just the cost of a modest machine-learning team to build and maintain the model; this doesn't strike me as anywhere near the 14-billion-dollar-plus (and counting) sinkhole that is Alexa.
IMO this is to drive adoption. Without customers using your product, it doesn't matter how much anything costs; there's no revenue. Give it away, increase usership, get feedback, improve performance, and then put a price tag on it. Per usual, Amazon is tardy to the party and must now compete against an established Copilot product (and GPT).
The difference being that generative AI is something you can actually make money from, as opposed to the old voice tech.
Does anybody know when / if this is available as an extension for Visual Studio (not Visual Studio Code)?
Vim and Emacs support: must-haves.
Strongly agreed!
If something is "free", YOU are the product. Always ask why they want their software on your computer.
I don't think this criticism is helpful. Why do any products have a free tier and a paid professional/business tier?
The goal is A) to get me to use it enough in personal projects that I convince my manager to pay for a business license, and B) encourage me to use more AWS API stuff (which CodeWhisperer is fine tuned on), where AWS makes the bulk of their revenue.
I have no qualms with either motivation.