GitHub was down
status.github.com

We just started using Gerrit, but with GitHub authentication. Can't sign in! Not doing our own account management appears to have been a poor decision.
Every time a service like GitHub goes down, I swear to myself I'll replace that dependency: surely a project that takes days, weeks, or months to implement will ensure I never suffer hours or minutes of downtime again.
I run a gogs[0] instance, which is a GitHub clone with great performance characteristics. I use it for both company and personal projects, and we haven't had any downtime. No more productivity drop when github is down :D.
I recently discovered another huge dependency I have when coding: Stack Overflow. That's going to be a lot harder to replace though...
[0]: https://github.com/gogits/gogs
PS: Yes, I realize the irony in using a github link here...
>No more productivity drop when github is down
It could have been the motto of RhodeCode (https://rhodecode.com). Self-hosted repository management not only has the benefit of keeping one's uptime under one's own control, but is also arguably more secure.
This isn't an argument against third-party account management. It's an argument against single points of failure.
You can (and many websites do) support multiple auth providers such as Facebook, Twitter, GitHub, etc. If you bind your account to more than one provider, you can mitigate this risk to a large extent.
Even with that setup, people normally don't bother to associate more than one set of credentials with their account. So you would have to institute a more-than-one credential practice.
Why not use Google for authentication? It seems like GitHub is down relatively often.
Google OAuth goes down as well; one of the more recent outages was even discussed here: https://news.ycombinator.com/item?id=11526775
That was more than 3 months ago. Github goes down more often than that.
Blame it on OAuth/OpenID.
I've heard some good things about GitLab but seems like the majority of the OSS community is still on GitHub (...for now).
We're only more reliable when self hosting. GitLab.com is growing very fast and we have availability and speed problems. But I love the 'for now'. We're working to improve .com and we'll keep launching awesome features every month.
That's refreshingly open and honest, thanks!
Why would one assume that their self-hosted git will never go down or will be up more than github?
At least when it's self-hosted, it's your responsibility and you can fix it. When you are anything larger than a tiny company, this is much less risky.
Remember that 'the cloud' just means 'someone else's computers' so you are subject to their infrastructure management practices and uptime guarantees (or lack thereof).
Even for a company much larger than tiny, you're better off with GitHub from a reliability perspective. Most medium companies (up to hundreds of employees) do not have operations staff anywhere near as responsive as GitHub's are.
Look at their status history. The vast majority of companies could not boast such a record for their internal operations.
Not to mention if it's one person who decided to set up the self-hosted solution and then that person leaves. How reliable is it then?
Someone else you pay to keep data safe, up and running to do continuous integration in this particular case.
(Almost certainly) less load, more control over the hardware it's running on.
What makes you think GitLab won't have any downtimes?
You can self-host GitLab, so he's probably talking about that...
GitLab is down for about 2 minutes per month. GitHub is down for the second time today, for more than 20 minutes... can you hear the software and builds breaking? Ouch.
GitLab is nowhere near the size and popularity of GitHub.
I'm looking forward to the postmortem—GitHub usually does an excellent job with those.
I'm looking forward to the post looking forward to the postmortem - HN usually does an excellent job with those.
I had just posted a link to a GitHub issue, and when I clicked it to check that it worked, I wondered whether I had done something wrong. GitHub has so much activity that many people could be thinking the same thing right now.
I also guess hundreds of people are wondering whether the commit they just pushed killed GitHub's servers.
I did the same. I spent a solid minute trying different things and googling to figure out what was wrong. Checked the status page and bam!
It's still down. :(
Now there is a human denial-of-service attack, because everyone is refreshing their pages like there's no tomorrow.
All DOS are because of humans?
When used as an attack, the efficient method is to buy servers or have a botnet do it for you.
But a human sets it in motion.
Has anyone got a good Github backup script that's actually robust? Something that will reliably go through all your company private repos and clone the lot. I've found a few, but they're all clunky and in serious need of adaptation. (e.g. https://gist.github.com/rodw/3073987 which is the best I've found so far.)
The hard part appears to be finding one that handles an organisation account well, not just a single user.
What are other people here actually using?
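For reference, here is a minimal sketch of the kind of script being asked for, using the GitHub v3 API to list an organisation's repos (including private ones, given a token with the "repo" scope) and mirror-clone each. The org name, backup directory, and `GITHUB_TOKEN` environment variable are assumptions; adapt them to your setup.

```python
# Hedged sketch: mirror-clone every repository in a GitHub organisation.
import json
import os
import pathlib
import subprocess
import urllib.request

API = "https://api.github.com"

def repos_url(org: str, page: int) -> str:
    # /orgs/:org/repos returns private repos too when the token
    # has the "repo" scope; results are paginated.
    return f"{API}/orgs/{org}/repos?type=all&per_page=100&page={page}"

def mirror_path(base: str, full_name: str) -> pathlib.Path:
    # "myorg/myrepo" -> base/myorg/myrepo.git (bare mirror layout)
    return pathlib.Path(base) / (full_name + ".git")

def list_repos(org: str, token: str):
    # Walk the paginated repo list until an empty page comes back.
    page = 1
    while True:
        req = urllib.request.Request(
            repos_url(org, page),
            headers={"Authorization": f"token {token}"})
        batch = json.load(urllib.request.urlopen(req))
        if not batch:
            return
        yield from batch
        page += 1

def backup(org: str, base: str, token: str) -> None:
    for repo in list_repos(org, token):
        dest = mirror_path(base, repo["full_name"])
        if dest.exists():
            # Existing mirror: just fetch everything that changed.
            subprocess.run(["git", "-C", str(dest), "remote", "update"],
                           check=True)
        else:
            dest.parent.mkdir(parents=True, exist_ok=True)
            subprocess.run(["git", "clone", "--mirror",
                            repo["ssh_url"], str(dest)], check=True)

if __name__ == "__main__":
    backup("myorg", "/backups/github", os.environ["GITHUB_TOKEN"])
```

`--mirror` keeps all refs (branches, tags, notes), so re-running the script gives incremental backups; it does not capture issues or wikis, which need separate API calls.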
You could try github-backup [0]. Quoting from the description:
> It backs up everything GitHub publishes about the repository, including branches, tags, other forks, issues, comments, wikis, milestones, pull requests, watchers, and stars.

And did I mention that it's written in Haskell!

Sounds good :-) Are you actually using it? What's your experience using it to try to back up an organisation account?
edit: "github-backup does not log into GitHub, so it cannot backup private repositories." So, looks like that won't do it for us, sorry :-(
Strangely, pulling from and pushing to github works. Just that no page on their website loads.
The git hosting is fairly independent of the web site. It happens fairly frequently that one is down and the other isn't. The git hosting has a resilient distributed back end designed to maintain redundant replicas and be highly available. The web site is, I believe, a huge Rails app.
Not only the website; all the GitHub Pages sites were down too.
Latest update [0] says:
Service is recovering and we are continuing to monitor.
[0] https://status.github.com/messages/2016-07-21
By the way, why is the status from July 15, 2016 to July 21, 2016 red?
It looks like they've lost a chunk of data - the last items in my news feed are from the 14th of July.
My activity is missing on my activity page, but the issue I just filed like an hour ago is there on my contributions page. I suspect that perhaps some of the feeds and lists have to repopulate, but nothing is really missing.
same here :(
Github was good while it lasted. Back to SourceForge.
Then if SF goes down we'll go back to emails & patches. Issues will be tracked with post-its & todo lists and communicated by voice.
I really wish we could get the community effects of github without github and have everyone self-hosting repos or even a fully distributed solution on top of something like IPFS.
I am sorely tempted to just start tracking issues as YAML or CSV + markdown files in an issues directory. That gets me all kinds of features that GH doesn't provide:
- offline issue management (bugs on a plane)
- issues associated with branches
- grep, awk, ack, sed
- all of the above, available to git hook scripts
The only thing holding me back is that I really want to get this right. This needs to become a standard, not "Dan's weird repo that no one understands".
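As a rough illustration of the idea, an issue could be one markdown file with a tiny "key: value" header, and tooling stays trivially small. The layout and field names below are made up for illustration, not any existing standard:

```python
# Hedged sketch of the issues-as-files idea: one markdown file per issue
# in an issues/ directory, with a minimal "key: value" header block.
import pathlib

def parse_issue(text: str) -> dict:
    # Header lines like "status: open" run up to the first blank line;
    # everything after it is the markdown body.
    head, _, body = text.partition("\n\n")
    fields = dict(line.split(": ", 1) for line in head.splitlines())
    fields["body"] = body
    return fields

def open_issues(root: str):
    # Yield (filename, parsed issue) for every open issue under root.
    for path in sorted(pathlib.Path(root).glob("*.md")):
        issue = parse_issue(path.read_text())
        if issue.get("status") == "open":
            yield path.name, issue
```

Because the issues live in the repo itself, they branch, merge, diff, and grep like any other file, and git hook scripts can read them offline.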
It's not fossil-scm, but git-appraise restores some of the lost distributed'ness:
https://github.com/google/git-appraise
https://github.com/google/git-appraise-web
https://github.com/Nemo157/git-appraise-rs
And there's also http://jk.ozlabs.org/projects/patchwork/ (http://patchwork.ozlabs.org/project/qemu-devel/list/).
Let's just get rid of the darn computers already.
I gave up and started posting to usenet.
Young blood here. Can we go further?
There's only so far tin can and string will go.
19 minutes and twelve pop-under ads later... I can't commit my changes until I punch the monkey and learn one weird trick to make my teeth whiter. Ah, the good old days.
This is the new "It's compiling!"
I would like to take this moment to remind everyone the new episode of Mr Robot is out.
Are there companies out there that include GitHub as such a critical part of their infrastructure that, if the web front end or Git hosting goes down, their production servers are affected?
I'm sure there are, but how many people would actually have been affected by this specific outage, which only seemed to affect the web front end?
I've been planning to do a static company page, deployed as a GitHub organisation page. Maybe I should have a failover plan that can run off of S3 or something, and a short TTL on the DNS records...
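A minimal sketch of that failover idea: probe the primary origin and fall back to the S3 site when it's unhealthy, relying on a short DNS TTL so the switch propagates quickly. The hostnames and the plain-HTTP probe are assumptions for illustration:

```python
# Hedged sketch of a fail-over check for a static site: prefer the
# primary origin, fall back to a mirror when the primary is down.
import urllib.error
import urllib.request

def healthy(url: str, timeout: float = 5.0) -> bool:
    # A sub-400 response within the timeout counts as healthy.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def pick_origin(primary: str, fallback: str) -> str:
    # With a short TTL on the CNAME, repointing it at the chosen
    # origin takes effect for visitors within minutes.
    return primary if healthy(primary) else fallback

if __name__ == "__main__":
    origin = pick_origin(
        "https://example.github.io",
        "https://example.s3-website-us-east-1.amazonaws.com")
    print("serve from:", origin)
```

In practice a cron job like this would call the DNS provider's API to update the record instead of just printing the winner.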
Well that was scary. I pushed some changes and noticed the CI server didn't do anything and it reported no changes. So I checked the PR and sure enough it doesn't show any of my latest commits although git seems to think all is well.
Looks like it's up again.
and down again.
Yeah, I noticed.
Saw that Syncthing had an update for the desktop-version and wanted to check that the Android-client was on roughly the same version number, so that the protocols wouldn't be incompatible.
And because GitHub was down, I couldn't check the changelog, meaning that I had to go to such crazy measures as opening the Syncthing-app to check what protocol-version it displayed, which turned out to be a hundred times quicker than rustling through some changelog, but uh, something something, ramble ramble.
This is why you keep your code on a self-hosted GitLab machine with 2 cheap disks in RAID. GitHub should be used as a mirror.
I'd be interested to see what exactly caused this. They are usually very good about preventing downtime.
Probably a DDoS from a foreign actor.
What are the main motivations for foreign actors to DDoS GitHub?
There have in the past been some DDoS attacks that were most likely by a foreign actor, but they were directed against individual repos that contained material the attacker wanted to suppress.
See http://arstechnica.com/security/2015/04/ddos-attacks-that-cr...
Looks like github lost two weeks of its public activity data.
Obligatory "If only Git was a distributed system" snarky comment.
But this is seriously not a good time. Using gh-pages, hard deadline tomorrow morning.
Might https://surge.sh help?
Sure, but the stuff I need is already hosted on github. As is most of the API docs I need.
At the very least I need URLs for my stuff so I can put them in things I'm writing. But I need to check github to find them.
Basically, every 3rd thing I looked up in the last 20min was hosted on github.
GitHub needs to move slower and break fewer things. When was the last time Google was down for more than a minute?
The world ought to move faster, so that the red on status page would appear green (Doppler effect [0]).
April 11th.
They say everything is operating normally but my latest commits are still missing. Have they lost data?
Everyone should have a mirror on gitlab or bitbucket
Don't worry, it will come back soon
Github downtime is pizzabreak time
All the company stopped, yay!
Major service outage...
Time to procrastinate.
Seems straightforward.
500 errors; everything I ever built is on GitHub, including my web pages. I am screwed.
My blog is hosted on GitHub Pages.
back to normal now
And it's up!
I guess its up
github pls
How is it possible, in the modern world with all its software development practices, that a project like github.com can be down for more than 10 minutes?
It's pretty amazing it wasn't down for more than 30 minutes.