GitHub Copilot for individuals available without waitlist, with free trial
Duplicate of this? https://news.ycombinator.com/item?id=31825742
I've found Copilot invaluable for throwing together quick scripts, especially in languages I don't quite understand. Writing e.g. a bash script, and being able to add a comment saying

    # Print an error message in red and exit if this program returns an error

and have it fill in

    if ! some_program
    then
        echo -e "\e[31msome_program failed\e[0m"
        exit 1
    fi

makes it so much quicker to cobble together something that works without having to context switch and go Google something. That being said, I've found that when writing more complex code it has a real tendency to introduce subtle bugs that can really catch you out if you're not paying attention.

Purely from the amount of time I've saved, I'd say it's well worth the $10/mo for my employer (it only has to save a few minutes a day to be worthwhile). Very excited to see how they improve it in the future!
I've found when writing more complex code it has a real tendency to introduce subtle bugs that can really catch you out if you're not paying attention.
Yeah, that is basically my experience as well. On balance I feel I've wasted about as much time debugging broken Copilot code as I've saved by using it.
I totally agree. If you code professionally in a stack where Copilot performs well, $10/month is a steal. Assuming you're just 5% more productive with Copilot, that would easily translate into hundreds of dollars of savings.
It's for hobbyists that it's painful. Adding another $10 subscription might be too much for your budget, especially if you only code occasionally.
It would have been nice of them to introduce a free tier where you could use Copilot for a few hours a month.
This is unfortunate, because my experience has been that it's far more harmful than helpful when I'm working in stacks/techs that I'm comfortable and experienced in (the ones I use professionally), but extremely useful when I'm working in an unfamiliar language or stack, as I often do for hobby projects.
I guess that makes sense. It's mostly useful when working on mundane tasks on popular stacks. Experts working on non-trivial use-cases won't see much benefit.
Personally, I'm not in a coding position anymore and only code occasionally, on stacks I'm mostly unfamiliar with: Copilot is a godsend, as it saves me from googling every other line of code to figure out which API calls I'm supposed to make to accomplish the task at hand.
I see it as a stackoverflow on steroids.
Even if I don't use it much, I guess I'll have to pony up the 10 usd because I would not want to go back to googling basic syntax for everything when I'm coding something.
Do you think you learn as well, or as deeply?
Is the act of searching for an answer part of learning?
Do you retain as much? Do you need to? Is that important?
I hope this whole thing ends up as a brain-space-saving tool, like Google. I don't need to memorise that API because I can Google the docs. Now maybe that goes a level higher.
Is copilot then not teaching you bad habits that you can't see due to lack of familiarity? Is that a good thing?
The idea of learning via Copilot, producing poor code, and having Copilot learn from that code... the whole feedback loop... is worrisome.
> far more harmful than helpful when I'm working in stacks/techs that I am comfortable and experienced in... but extremely useful when I'm working in an unfamiliar language
Ignorance is bliss.
This smells a lot like a Gell-Mann amnesia effect.
Ha I don't think it's totally that, but it may well be part of it!
I think the biggest thing is that when working in tech I'm unfamiliar with, it's extremely helpful to get some sort of skeleton in place, even if it's wrong in some way. I'm going to have to go slowly and evaluate it either way, so it doesn't really matter if it's got problems or is less than ideal. What I would do otherwise is just copy something from StackOverflow and then comb over it to adapt it to my needs. Copilot is more or less doing the same thing, but faster.
When I'm working in a stack I know well, I can quickly put down the code I need and it will generally be pretty good. Copilot can do it faster, but it gets things wrong a lot more often than I do. Since fixing something wrong is a LOT slower than me getting it right the first time, it ends up being more trouble than it's worth.
Maybe they can replace you with Copilot and an inexperienced, cheaper user.
Honestly, that’s the hope. Putting together well-solved combinations of computer functionality ought to become less-skilled work as technology progresses.
That's been the objective of programming languages for 50 years. It hasn't happened yet, because the essential complexity of programming problems isn't in writing the code.
I think it definitely has happened. We're creating more things, for cheaper.
We're also creating more complex things, which cost more money.
anecdotally, this is where the famous lack of modern skills comes from in engineering culture. If you keep doing the same things you've been doing, you'll look around one day and see that everyone has moved on.
The market for simple SMB websites is a great example. This went from custom HTML+webservices, to Wordpress, and now to WIX/shopify/square. I'd bet the market for SMB marketing will similarly move to near plug+play google/FB offerings.
However if you started out making websites in 1993, then there are a vast array of products and services one could move into over the last 3 decades.
We have just moved up the complexity curve.
It's the same with doctors, lawyers, everything.
If law - the human language equivalent of programming - hasn’t gotten simpler in past thousands of years as new abstractions and complications have arisen, I hold no hope for programming.
Surely the human language equivalent of programming is recipes and other types of written instructions. Law is far more abstract and subjective than programming.
Law is opposed to these sorts of changes due to the business model of the law firm. In the law firm world billable hours are king. Automation reduces billable hours. No law firm wants to do that.
I think this really depends on what kind of firm you're talking about. You could make the same case for contractors i.e. "billable hours are king". Take the example where you need to paint a house. You could hire someone off the street who does it with a paintbrush and rollers or hire a pro with a sprayer and prep knowledge to do it in 1/4 the time and with 10x the quality.
In this context automation could be a tool that a law firm uses to enhance the quality of their product. Personally, I would pay more for a tech-savvy law firm that embraces automation, not less.
A lot of contractors in painting/drywall do piecework rather than hourly. My roommate has been a drywall taper for 20+ years. When he quotes a job it’s a flat rate and then he tries to finish as fast as possible by using his best tools to speed up the job. On the other hand, if someone hires him on an hourly basis he puts away those fancy tools and does a lot more manual work, getting the job done slowly.
His rationale: why put wear and tear on his expensive tools if it’s just going to reduce his income in the end? Needless to say he prefers piecework because he likes to move from one job to the next as quickly as he can. He makes a lot more money that way.
I think it has more to do with having an adversarial law system. It doesn’t matter what new tool you come up with in the arms race. Your competitor will soon have it as well.
Law is different in most places because the “execution model” isn’t intended to be a process, but is instead a mediated dispute.
Maybe they will converge, and all programs will be written in legalese and interpreted by Copilot.
À la Ethereum's smart contracts?
Code isn't flexible enough to work for laymen. It feels like you need to be a developer and a lawyer to make a smart contract work as intended.
Maybe that works against my earlier premise, but I don't think that this is the way we want to go.
Or maybe they’ll replace a junior dev with copilot and make the more experienced folks spend more time “fixing” broken copilot code.
Weird that it prints the error to stdout. I mean, it's not that surprising really; I'd just expect error->stderr to be a pretty low-hanging association.
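For what it's worth, the stderr-friendly version of that snippet is just a ">&2" redirect on the echo. A runnable sketch, with "false" standing in for "some_program":

```shell
# Same red error message as before, but sent to stderr via >&2.
# "false" stands in for some_program so the sketch is self-contained;
# the original snippet would also `exit 1` inside the if-block.
fail_msg() { echo -e "\e[31m$1\e[0m" >&2; }

if ! false
then
    fail_msg "some_program failed"
fi
```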
I've seen many many scripts that ignore STDERR and just print everything normally.
Yes, but that's a perfect illustration for one of copilot's essential flaws: "a lot of flies eat poop" (side note: is that actually a saying? Asking because in German it is, and it fits perfectly here).
A lot of code is of mediocre quality. An ML service that learns from huge amounts of code without an ability to tell "good" code from "bad" code will only ever be able to produce mediocre code, at best.
> side note: is that actually a saying?
I've never heard it before, but I'll be using it now :)
It's saying "just because it's popular doesn't mean it's good" and it's doing it quite well.
Which may still be of value if you know and can recognize mediocre code.
Some code is like a giant pile of dirt; you need someone to pile it up and then you can go in and clean up the edges and make it "good" whereas other code is entirely delicate all the way through.
The big question is how much of each there is, and whether it can help. I suspect it helps many, but those who know enough to recognize where it can fail will have an advantage.
But newer programmers may never really "learn" the code the way the older ones do, as they'll just let the computer do the basics.
So, Copilot is a _junior_ developer. Isn't that a matter of just managing expectations?
If you don’t quite understand the language, how do you know it doesn’t also have subtle bugs?
Holy **. This is exactly the kind of use case ML code generation should NOT be used for. If you don't even understand a language you will probably not be able to debug subtle bugs that are introduced, especially in something like bash. Please don't do this for anything that touches prod.
I've been using Copilot for a few months and...
Yeah, it makes mistakes; sometimes it shows you, e.g., the most common way to do something, even if that way has a bug in it.
Yes, sometimes it writes a complete blunder.
And yes again, sometimes there are very subtle logical mistakes in the code it proposes.
But overall? It's been great! Definitely worth the 10 bucks a month (especially with a developer salary). :insert shut up and take my money gif:
It's excellent for quickly writing slightly repetitive test cases; it's great as an autocomplete on steroids that completes entire lines and fills in all arguments, instead of just a single identifier; it's great for quickly writing nice contextual error messages (especially useful for Go developers and the constant errors.Wrap calls; Copilot is really good at writing meaningful error messages there); and it's also great for technical documentation, as it's able to autocomplete markdown (and it does it surprisingly well).
Overall, I definitely wouldn't want to go back to writing code without it. It just takes care of most of the mundane and obvious code for you, so you can take care of the interesting bits. It's like having the stereotypical "intern" as an associate built-in to your editor.
And sometimes, fairly rarely, but it happens, it's just surprising how good of a suggestion it can make.
It's also ridiculously flexible. When I start writing graphs in ASCII (cause I'm just quickly writing something down in a scratch file) it'll actually understand what I'm doing and start autocompleting textual nodes in that ASCII graph.
100%
The thing I've most enjoyed is that it forces me to write out what I want to do in English before getting stuck in the weeds of how the code ought to work.
I've found if I explain the whole program ahead of time (the other day I wrote some python that converted the local time to display on a 13x13 grid of LEDs) co-pilot can write 90% of the code without me, just translating my explanation into python.
I think knowing how to express yourself to AI will be a unique skill set, akin to being "good at googling".
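The comment doesn't say how the local time was actually mapped onto the 13x13 LEDs, but a minimal sketch of one plausible scheme (purely illustrative; the original program and its layout are unknown) might look like:

```python
from datetime import datetime

GRID = 13  # 13x13 = 169 LEDs

def leds_for_time(now=None):
    """Light one LED per slice of the day: 169 LEDs spread over
    1440 minutes, filling the grid row by row as the day progresses."""
    now = now or datetime.now()
    minutes = now.hour * 60 + now.minute
    lit = round(minutes * GRID * GRID / (24 * 60))
    return [[(r * GRID + c) < lit for c in range(GRID)] for r in range(GRID)]
```

At midnight nothing is lit; by late evening nearly the whole grid is.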
> co-pilot can write 90% of the code without me, just translating my explanation into python.
I fear Copilot may encourage this type of pseudo-code comment. The most valuable thing the AI doesn't know is WHY the code should do what it does.
Months later, we'll get to debug code that "nobody" wrote and find no hints of why it should behave that way, only comments stating what the code also says.
Seems we're replacing programming with reverse engineering generated code.
I can understand where you're coming from, but if a developer commits Copilot code without understanding it, that's not really Copilot's fault.
That dev could have done the exact same thing with Stack Overflow snippets and created the same situation.
Sure, it's easier to make mistakes when Copilot suggestions are so readily available, but it's just a tool that needs to be wielded properly, like any other.
It feels like an evolution of your typical IDE niceties that modify characters as you type.
I still remember when people were worried autocomplete would lead to code mistakes and variable mix-ups.
Now the one argument against this is if we become shielded from the full input and outputs of a tool.
It would work badly, but you could have a "copilot(code_fragment, args, ...)" call that generates and executes a snippet blindly, hoping it's correct. That's when it stops being a hammer and starts being a boss looking over your shoulder and telling you what to do.
Fortunately, I think we have a while before AI can reliably spit out useful AST programs. But it could happen eventually.
Hm, it is a conundrum to debug code nobody wrote.
On the other hand, if an improved AI comes out in a couple of years, we can feed it the same pseudo-code and enjoy improved output.
I would rather have a docstring explaining what the code should be doing
I've had co-pilot write its own comments too, my favorite one was, "this is a kind of a hack but it works", very professional indeed!
> It's excellent for quickly writing slightly repetitive test cases;
Ever considered parameterization?
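For context, "parameterization" here presumably means something like pytest's parametrize, where one test body covers many cases (the function and cases below are illustrative, not from the thread):

```python
import pytest

def slugify(text):
    # Toy function under test: trim, lowercase, spaces to hyphens.
    return text.strip().lower().replace(" ", "-")

@pytest.mark.parametrize("raw, expected", [
    ("Hello World", "hello-world"),
    ("  padded  ", "padded"),
    ("already-slugged", "already-slugged"),
])
def test_slugify(raw, expected):
    assert slugify(raw) == expected
```

The trade-off is exactly the one being debated: fewer repeated lines, but a failure now points into a parameter matrix rather than a named test.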
Then you have a harder-to-reason-about test that can fail somewhere in its complex parameter matrix. I think it's okay for tests to be a little bit repetitive.
> Then you have harder to reason about test that can fail in its complex parameter matrix.
Can you give an example of tests that would be better with repetition than parameterization?
Yes, it writes the different cases out for you. The arguments...
I had access to the beta for a while, but only tried it out a few days ago.
I was skeptical going in, but ... wow. There were a lot of jaw-drop "how the hell did that just happen?" moments.
The system's ability to quickly learn from local code is especially impressive.
I had to implement a non-trivial Rust trait for about 20 types, which is not just copy-pastable between types, or I would have used a macro. On the first one, Copilot didn't have a clue what I wanted. The second one was halfway auto-completed. The other 18 were mostly just generated correctly, with some minor fixes required.
It literally was 5x faster than without Copilot.
And that's for a rather niche language with not that much code to learn from... I didn't even try it out with something like Java or Typescript.
Even in this early iteration the productivity boost would easily be worth $100+ for me, even though I'm not working all that much on repetitive code like REST endpoints or UI components.
The productivity boost is absolutely worth the money. It makes programming more enjoyable.
This is not good news. It's actually changed from being free to costing $10/month.
The productivity benefits are worth more than $10/month easily, but somehow I still don't want to pay for it... maybe it's because they're using public domain code to train the model.
Nitpick - they're not (just) using public domain code for training - they're using "publicly available sources, including code in public repositories on GitHub."[0]
This includes a lot of code under copyleft licenses, and possibly even more code under no license at all (implicitly All Rights Reserved). It's not obvious to me that it's ethical (or possibly even legal) to sell a model derived from code not in the public domain.
I wonder if it'd be legal to train your own model with a similar architecture but using input-output pairs generated from Copilot itself (fair use, right?). Sell it for $9/month.
Good idea - VSCode doesn't even have a dumb "autocomplete from all buffers" a la Emacs. Of course, LSP is awesome when available, but I'd also use the dumber version every day for a few specific cases...
(I think those cases would be mostly full lines that I know exist on other files in the project - but I don't want to go there, copy and paste if I can avoid it..)
I disagree that it's not good news.
Copilot clearly cost money to run, so it couldn't be given away forever. By putting a business model on it, it means it's less likely to be rugpulled in the future.
>The productivity benefits are worth more than $10/month easily, but somehow I still don't want to pay for it..
Are the productivity benefits worth more than $10/month?
I've easily saved a couple of minutes each day by not having to search some API docs, since Copilot already knows how my variables should fit into the function call. Even if it's just a minute a day, 20 days a month, it works out being worth $10 easily if you're on a typical western software dev salary.
Uh, yeah. Consider how much programmers are paid per hour. Or how often you look up Stack Overflow or language documentation.
If a programmer is paid per hour, then this is only worthwhile if it increases their hourly wage. Otherwise, it's just another cost.
Now, if they are paid per job and it makes them more productive... That's different.
The majority of programmers in the US are salaried.
Isn't the lookup part of the learning?
If you just get the solution the learning gets smaller.
Yes, but I do not want to learn all the internals of every random API; if Copilot can help me avoid some of that, I'm interested.
That one would be easy to measure for yourself, because that "worth it" depends on your number of hours worked a month and your compensation.
Let's be on the more conservative end and say that an engineer gets paid $60/hour (I know that most engineers are salaried, so you'll need to divide the monthly pre-tax pay by 160 hours to get that hourly number). If Copilot saves more than 10 minutes of your time a month, then yeah, it is worth more than $10/mo.
Do the math on this one yourself, based on how many hours you work in a month on average + your compensation for that time period.
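That back-of-envelope calculation, spelled out (using the assumed $60/hour figure from the comment):

```python
hourly_rate = 60     # assumed pre-tax dollars per hour (monthly pay / 160)
subscription = 10    # Copilot, dollars per month

dollars_per_minute = hourly_rate / 60            # $1.00 per minute
breakeven_minutes = subscription / dollars_per_minute
print(breakeven_minutes)  # → 10.0 minutes saved per month to break even
```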
Yes, for sure. I'm entry level, but given variable overhead (cost of HR, health insurance, time spent not coding, etc.) it only needs to save me about 5 minutes/month to break even.
I've been trialling the beta for the past few months and plan to recommend a corporate account to our leadership once it becomes available.
As a graduate student on a very small salary? absolutely.
The calculus is: $10 for a pizza or something for myself, or Copilot.
Given that Copilot has saved me too many hours to count (the one thing that truly matters), it's a no-brainer.
You don’t want to pay for it because you like free stuff. So does everyone. But that doesn’t pay the bills.
You don't need to concoct some moral argument. It's trivial to see that a large amount of creativity went into developing the system; it's not just a repackaging of public-domain works.
The whole controversy was that it was a "repackaging of [licensed] works": https://twitter.com/stefankarpinski/status/14109710611816816...
Copilot's model is a derivative work of GPL-licensed code, but its source is not available anywhere, which violates the GPL.
Computers don’t violate licenses, people do. If you’re writing MIT code and copilot tells you to include Quake’s square root algorithm and you do so, that’s on you. If Google tells you to include Quake’s square root algorithm and you do so, it’s on you. If stack overflow tells you to do so and you do, that’s on you. In all of those cases it doesn’t matter whether the platform’s software is open source, why should it for copilot?
It’s the poorest of craftsmen who blames their tools. Rise above.
Also, note that simply accessing an online service does not constitute a distribution of software, and thus does not fall under GPL's domain. And "derivative" appears nowhere in the GPL.
The beta was implicitly understood to be free temporarily and they never indicated otherwise.
I don’t mind the cost although I do wish it was a single charge along with my existing GH subscription instead of separate ones. Seems like a missed opportunity to bundle things.
I don’t care about getting a discount, just dislike being billed on two different billing cycles and it seems like a missed opportunity for them to get more devs on their paid tier.
If it were trained with public domain code, I’d feel a lot better. But it’s done with code with all sorts of restrictive licenses. The only thing that will change that is new laws (e.g. fair use de-exemptions for ML products)
> maybe it's because they're using public domain code to train the model
Do you also consider it ethically questionable to look up public domain code for inspiration (like on StackOverflow) while being paid $20,000/month?
Because I certainly do that plenty. I think most of us do
I'm relieved it's only $10 per month. And my company will get the bill anyway.
Because your thinking is free; paying to have something think for you is not. If saving time saved YOU money, then it would be worth it.
My mind is blown by the people saying $10/month is too much.
$10 a month for this is an unbelievably good deal!
If you value your time, this is a ridiculously good deal.
If you're looking at it from the perspective that my employer pays me $X/hour to write code, and CoPilot is less than that price, then it might be a bargain in that sense. On the other hand, my employer pays me for my experience and knowledge. Most of my time isn't spent writing code, so I don't think it's fair to just look at an hourly dollar amount when deciding whether it's worth it.
That being said, my biggest issue with CoPilot is that it's a ML system trained from open source and public repositories. Also straight from the website, "By using GitHub Copilot [...] you help to improve GitHub Copilot." I'm now paying for the privilege of handing over my data to GitHub so they can combine it with open source code to make more money off of people who are convinced it's somehow saving their company money.
It is if you're not making any money from your side-projects. Seems like everything's a subscription nowadays, all adds up.
I used to be in that camp but honestly, as long as you keep a spreadsheet of subscriptions it's really easy to manage, and more convenient than the older one time purchase for a large amount model (at least for me)
> My mind is blown by the people saying $10/month is too much. $10 a month for this is an unbelievably good deal! If you value your time, this is a ridiculously good deal.
Maybe people who used it didn't find enough value in it? Is that their fault?
I haven't used it, but it's certainly not immediately clear to me that there is enough value there to justify $10/mo.
You should give it a go if you can spare the time. You'll need a credit card but they give you a free trial.
5 minutes in my IDE and I'm already sold on trying it for the next month.
Eh. I might use it if my employer obtained it for me, but I'm not gonna go out of my way to pay $10 to Microsoft for it.
I'm also still skeptical about the legal aspects of it. Microsoft says that training the model is fair use. Good for them, but that's not applicable to me in any way.
Honestly, it puts the legal burden on YOU. Think of GitHub Copilot like a fancy-pants search engine: it mixes and matches stuff to your query, and doesn't even tell you where it copied it from! Indexing search results: fair use. Using that code in YOUR codebase without a license? :)
The value from Copilot seems to be very dependent on exactly what you are coding, in which language, and which libraries/frameworks you are using. I've been using it on and off for about a year and on the whole consider it at best a net neutral in terms of value. For every 10 times it saves me 2-3 minutes typing out some boilerplate, it costs me 20-30 minutes to sort out some weird bug or subtle gibberish it has introduced in my codebase. Half the time the suggestions are just obviously wrong.
Agreed. I was just thinking about it and I’d probably pay $100/month. Maybe even more.
(Shhhh don’t tell GitHub!)
Copilot very rarely gives me relevant results; I definitely would not pay for it.
you're holding the phone wrong :P
GitHub Copilot regurgitates my colleague's highly niche code verbatim. I know it's his because all the same bad variable names are used.
It doesn't train itself on your local code. It's probably using your colleague's code in your codebase as part of the input to decide what you want. Someone else in a separate project will never see Copilot recommendations of your colleague's code (as long as GitHub doesn't change their ToS).
It's on github.
Someone else running copilot in a different workspace wouldn’t see those recommendations, you can try it yourself in a new workspace.
This isn't the first time it's done this. If I recall, it regurgitated the Doom source, along with the swearing.
Yes, because that code is duplicated exactly hundreds of times over across the internet. Your colleague’s isn’t.
His is duplicated (uncredited, without his permission) numerous places on the internet. And github reproduces it verbatim. So now I expect it to be in even more places.
My biggest issue with Copilot (and the reason I don't really use it anymore) is that it got in the way much more than it seemed to help. I work with TypeScript codebases, and in VS Code you get very nice IntelliSense autocompletion on objects. What I found is that with Copilot, that totally breaks down: Copilot will override IntelliSense and provide its own hint.
I found it fun to play with for smaller projects, but during my day to day work I found that it always seemed to get in the way. You're trying to type out a function and Copilot is always there going "do you mean this", eventually you just turn it off.
I'm sure there are some good use cases for it, but in my line of work I found it to only really be useful for small things and toy projects where you are trying to demo the capabilities of Copilot more than you are trying to actually build an app.
Does VS Code not show the two sets in different interface elements? In JetBrains Rider, it shows suggestions (completion) in a dropdown and Copilot inline. And you press a different key for each.
I'm not sure which keybinds you have enabled (possibly the Visual Studio style?), but in PyCharm/Rider for me (IDEA keybinds), both completions and Copilot use Tab to complete, which, as the user you replied to said, tends to get in the way and trip me up as often as the suggestions provide something of benefit.
Have been using Copilot in the beta and it's just been amazing. I can't remember how many times my jaw has literally dropped as it knows what I'm trying to do, or yelled "Holy sh*t!" as it feels like it's reading my mind.
That said, I was highly disappointed by the switch to non-free. Maybe they never said it would remain free, but they certainly didn't advertise that it wouldn't be later. I feel bait-and-switched.
Yes, $10/mo. isn't a lot if you're getting paid to work, but if you're developing on side projects that aren't (yet?) making any revenue, it's kinda a dealbreaker.
Not really, $10 is reasonable even if you’re a hobbyist who never expects to generate any revenue.
Reasonable doesn’t mean a hobbyist would pay $100/year for this. That’s not pocket money for most of the world.
Have you ever thought that hobbyists exist outside the US and Europe?
I live in Iran, one of the worst economies in the world. $10 is 4 meals, and it's still worth it IMHO.
Iran's economy is still doing well compared to many countries of the world.
Africa is saying hello
Is anyone interested in starting to work on an open source alternative? I have a few terabytes of GitHub repository archives, a bit of experience, and a non-profit umbrella (the goal is AI safety).
If your goal is safety, you shouldn't be pushing capabilities or general availability of capabilities.
There is an open source alternative. It shouldn't be hard to Google it.
pray tell, what about co-pilot is dangerous?
I think he hopes his Copilot competitor will be used to increase the safety of future AI systems that do not much resemble Copilot in purpose or architecture. (I agree with Eliezer that it will probably decrease safety on net despite his good intentions.)
Copilot and Google Colab are interesting because they allow the creation of vast datasets of developers' work sessions: incrementally editing and improving code line by line. Exploiting such datasets has interesting potential. And currently that potential seems to be limited to Microsoft and Google.
It might be a good idea to have something in the open as well. But I agree, this has its disadvantages. Also, the chances of a small non-profit team coming close to Microsoft's or Google's efforts are near zero. The amount of resources and time is incomparable.
How long before repos start being poisoned with:

    # Print an error message in red and exit if this program returns an error
    rm -rf /

If it doesn't train itself on your company's codebases, I don't see much use for it. I spend much more time understanding requirements than writing code. By the time I know what to actually code, the coding part is pretty easy. The hardest part about coding is making sure what I write is high quality and fits in nicely with everyone else's code and the established architecture of the module I'm working in. If Copilot can't say "Oh, I see we're using this existing function or service to do this part, let's not duplicate code", I don't see it being useful.

I don't need help parsing a string or iterating over a list.
On the one hand, this is another clever pricing scam, one Microsoft wins either way.

Unless you want to pay for your IDE + Copilot with JetBrains, this will still mostly benefit Visual Studio Code, and Microsoft knows you cannot beat free. So this is a great resurgence of 'Embrace' via free developer tools.

On the other hand, Copilot is probably going to ruin Leetcode, Hackerrank and Codility screening, as a candidate can Copilot the solution if not checked properly.
If GitHub charges money for a tool producing derivative works of other people's code (maybe without permission, OSS or not), perhaps those authors should be compensated.
To use a famous example: if an AI were taught to perform heart surgery by viewing footage of skilled surgeons... would it not be fair for those surgeons to earn an ongoing amount for their input? An input that is essential to the AI's function.
I worry this is the pointed end of the data-hoarding big-Co Internet. Our data is used, unwillingly, to train models that (potentially, in future) degrade _our_ value while giving nothing back.
Then degrade the data.
Time for programmers to get the same rights as musicians and composers.
That's clearly a remix of the work of others
They already do, no? Problem is this magical solution requires ignoring all copyright law. We can't prove it but when someone does, it'll be a legal shitshow.
Which programmer, or his descendants, receives payments up to 70 years after his death, or per call of his written software?
And yes, MS is now monetising the work of others.
It doesn’t require ignoring copyright law. It’s fair use. Which is a bad thing. We need new laws for this stuff.
good point
If I use a ii-V-I progression in my music, who gets compensated? I find most of Copilot's suggestions are similarly basic. Every lawsuit over whether an artist used a few of the same chords or a snippet of melody has been totally asinine IMO, like how all the money from "The Thong Song" went to Ricky Martin just because he was quoted for one measure.
When I write a few lines of code myself, should I search GitHub to make sure no one else has included the same algorithm in their codebase before slapping my own license on it?
Read Cariou v. Prince before you get too excited.
I wonder how long until state actors realize that Copilot would be a perfect vector for getting developers to introduce subtle vulnerabilities into their own projects?
By its very structure its output always looks credible, and it's not always right; it wouldn't be a sign of foul play if Copilot suggested some code that looked just right but happened to backdoor your cryptosystem or protocol.
Maybe it would be a little tricky to get it to produce NOBUS ("nobody but us") vulnerabilities that were credible mistakes, but if the target isn't OSS then NOBUS isn't really that important.
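To make that concrete with a hypothetical of my own (this is an illustrative sketch, not actual Copilot output): a suggested token check using `==` looks perfectly credible at a glance, but it quietly opens a timing side channel that the constant-time version closes.

```python
import hmac

def check_token_naive(supplied: str, expected: str) -> bool:
    # Looks fine, but == short-circuits on the first differing byte,
    # leaking timing information an attacker can measure.
    return supplied == expected

def check_token_safe(supplied: str, expected: str) -> bool:
    # hmac.compare_digest runs in time independent of where the
    # strings differ, closing the side channel.
    return hmac.compare_digest(supplied.encode(), expected.encode())
```

Both functions return the same answers, which is exactly the problem: no test suite will flag the naive one.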
I have given my coworkers, friends and family members a waiver to punch me in the face until I come to my senses the day I decide to use this tool.
Well it was good while it lasted. $10 is too much for me.
Agreed, I feel like my limit was 5 bucks. I get good suggestions that help me prevent repetitive patterns but I don’t get any useful “give me code that does X” as advertised.
I don’t get any useful “give me code that does X” as advertised.
I found it actually worse than useless in those cases. Often I will type a function name and it will populate the function body with code that, at a quick glance, looks like exactly what I want but on closer inspection is complete nonsense. Trying to understand whether the suggested code actually does what I want is often slower than just writing the code myself.
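A hypothetical example of that failure mode (my own illustration, not a real suggestion): a binary search whose body mixes two bracketing conventions. It reads as plausible, passes casual inspection, and silently never examines the last element.

```python
def binary_search_buggy(items, target):
    # Plausible-looking body: `high = len(items) - 1` paired with
    # `high = mid` and `low < high` mixes conventions, so the element
    # at the final index is never compared.
    low, high = 0, len(items) - 1
    while low < high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid
    return -1

def binary_search_correct(items, target):
    # Consistent inclusive bounds: loop while low <= high and
    # shrink both ends past mid.
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```

The buggy version works for most inputs, which is what makes reviewing this kind of output slower than writing it yourself.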
Exactly how I feel, the code it produces like that (unless it’s regurgitating a pattern I am already writing which is useful) is usually wrong in some subtle way I have to figure out.
Have you tried tabnine?
$12/mo (annual pricing) or $15/mo month-to-month is definitely more than $10/mo.
This kind of comment about Copilot troubles me: "Copilot can write 90% of the code without me, just translating my explanation into Python." If the code has been mostly produced by Copilot, does a user of Copilot have sufficient grounds to claim being the author and assert rights based on copyright? Globally?
The United States Copyright Office has explicitly rejected AI-created works.
> the Office will not register works “produced by a machine or mere mechanical process” that operates “without any creative input or intervention from a human author” because, under the statute, “a work must be created by a human being”
https://www.copyright.gov/rulings-filings/review-board/docs/...
I had a similar discussion with a lawyer friend about copyright on artwork generated by ML models and what constitutes "artistic feedback" from the human. The tl;dr of that story is that it depends on the jurisdiction you are looking at: in the UK you have copyright, in the US you do not.
I suppose that something similar is at play here.
$10 a month. Meh. I've got enough subscriptions nowadays; that's way too expensive IMO as someone who can only use it as a hobbyist (it doesn't work on my work machine due to policies).
Poor show, Microsoft. It should've just been free, as goodwill and to help people build software that can be deployed to Azure for the real money.
I was pairing with a coworker recently who had Copilot turned on. They were driving and I was walking them through something, and I was kind of mind-blown at how many times Copilot suggested exactly what I was about to say. Some of the variable names were off, and minor stuff like that, but it definitely seemed like it knew what I was trying to do. I haven't turned it on personally yet, but I was very impressed.
This was expected and $10 a month seems reasonable for the value.
I still constantly get surprised by how good it is. Just now I had a function I had been procrastinating on, thinking it would take a long time to write. When I finally sat down to actually do it, I typed the function name and Copilot autocompleted the whole thing. I didn't have to modify it at all. I wonder what other things I am unnecessarily procrastinating on.
I just got on the yearly plan. Just a couple of days ago I wanted to share how happy I have been using Copilot. It is definitely a productivity tool for your toolbox.
I cannot share the code here, but to explain in simple terms: I was able to write around 100 lines of code in, say, 10 seconds. I had a switch statement and around 7 variables, with around 5 if conditions inside each case. I retrieve data from an endpoint, do JSON deserialization, convert the JSON to a data class, loop over the data, and then work with the elements.
Copilot understood the first two conditions of what I was doing and just completed the rest for me, filling in the right variables, type annotations, etc.
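In rough Python terms (illustrative names only, since I can't share the actual code), the shape was something like:

```python
import json
from dataclasses import dataclass

@dataclass
class Record:
    kind: str
    value: int

def process(payload: str) -> list:
    # Deserialize the endpoint response into data classes, then
    # dispatch on each record's kind -- the switch/if shape above.
    records = [Record(**item) for item in json.loads(payload)]
    results = []
    for rec in records:
        if rec.kind == "add":
            results.append(f"added {rec.value}")
        elif rec.kind == "remove":
            results.append(f"removed {rec.value}")
        else:
            results.append(f"ignored {rec.kind}")
    return results
```

Copilot filled in the repetitive dispatch branches after seeing the first couple, which is exactly the kind of boilerplate it is good at.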
I would definitely recommend this to anyone at this price point. Sure, I was happy using it for free since the launch of the product and did not expect it to come out of the waitlist so soon, but it is OK to pay for it.
I use it on both VSCode and Neovim and it works beautifully with both editors.
PS: I was an early adopter of TabNine too and used it for quite some time; however, Copilot would be my preference.
Lots of $10/mo is too much/great deal discussions here. Wonder if anyone would pay $30/mo for this?
This is the coolest and among the most useful tech I've seen in 10 years
It's gonna feel absolutely crazy having to look up every line of code and random functions when working in unfamiliar projects, now that I'm used to Copilot. But I'm a student, so I'm good for now, I guess. It makes it so easy to write small and trivial scripts without having to think too much about the syntax; it's amazing.
I would like to see a full telemetry dashboard on how Copilot is helping me. Plenty of metrics to bubble up to end user.
I wonder if this is because enterprise / business demand would be substantially higher and they're worried about making it generally available due to load and they wouldn't be able to handle it?
Unless I misunderstand something, it's not yet for businesses, right?
I am glad I can now pay and rely on it for real. As long as it was a beta ("we are trying things"), there was a chance the rug could be pulled, or that they'd decide to charge something outrageous. I am happy to pay $100/yr for it.
Pay to train your replacement
@dang Merge with https://news.ycombinator.com/item?id=31825742 please?
If I thought there was a chance that my employer would pay for this I'd ask, but no hope really.
Won't making it a paid service greatly limit its training data in the long run? Am I thinking too narrowly here?
Nice. Not even halfway through my CS degree and my would-have-been future job has already been automated. Thanks, GitHub!
still no PHP support :(
Was this even useful? I have to review a lot of absolute shit from humans. I can’t even imagine the damage that such a large corpus of humans plus an AI can generate.
It's useful in the right hands: it can suggest functions and language features you don't know about, and it can do some handy stuff automatically (like reversing a list in place) so you have more mental bandwidth for everything else.
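For the curious, the in-place reversal in Python is a one-liner, and the distinction from the copying form is exactly the kind of thing Copilot surfaces:

```python
# Reversing in place mutates the existing list; slicing builds a new one.
nums = [1, 2, 3]
nums.reverse()       # mutates nums in place
copy = nums[::-1]    # leaves nums alone, returns a reversed copy
```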
It also produces a ton of horrible, nonsense code. I totally agree that if github's corpus starts to fill with that stuff, the overall quality of github will tank and I wonder how they'll continue to train the model.
That’s really my worry. ML is crap in crap out. If you train it with more crap it’s an exponential decaying curve.
Now I have to gate declining quality marketed as a time saver. It really just moves the cost elsewhere.