PhantomJS: Archiving the project, suspending development

github.com

586 points by gowan 8 years ago · 137 comments

emilsedgh 8 years ago

Chrome and Firefox gaining headless modes is the ultimate goal Phantom could've achieved.

So I consider it a complete success.

Kudos to all contributors.

wgjordan 8 years ago

This project has been effectively dead since April 2017, when Vitallium stepped down as maintainer as soon as Headless Chrome was announced [1]:

> Headless Chrome is coming [...] I think people will switch to it, eventually. Chrome is faster and more stable than PhantomJS. And it doesn't eat memory like crazy. [...] I don't see any future in developing PhantomJS. Developing PhantomJS 2 and 2.5 as a single developer is a bloody hell.

One potential path forward could have been to have PhantomJS support Headless Chrome as a runtime [2], which Paul Irish (of Google Chrome team) reached out to PhantomJS about. However, it seems there hasn't been enough interest/resources to ever make this happen.

[1] https://groups.google.com/d/msg/phantomjs/9aI5d-LDuNE/5Z3SMZ...

[2] https://github.com/ariya/phantomjs/issues/14954

micimize 8 years ago

Timeline of what led to this, from what I could gather:

• phantomjs is 7 years old, @pixiuPL has been contributing for about 2 months

• @ariya didn't respond to his requests for owner level permissions

• @pixiuPL published an open letter to the main page of phantomjs.org https://github.com/ariya/phantomjs/issues/15345

• the stress leads @ariya to close the repo.

• @pixiuPL intends to continue development on a fork

This is a good reminder of why non-technical skills are so important in open source and in general.

TheAceOfHearts 8 years ago

Some people are mentioning headless Chromium, so I wanna mention another tool I've used to replace some of phantomjs' functionality: jsdom [0].

It's much more lightweight than a real browser, and it doesn't require large extra binaries.

I don't do any complex scraping, but occasionally I want to pull down and aggregate a site's data. For most pages, it's as simple as making a request and passing the response into a new jsdom instance. You can then query the DOM using the same built-in browser APIs you're already familiar with.

I've previously used jsdom to run a large web app's tests on node, which provided a huge performance boost and drastically lowered our build times. As long as you maintain a good architecture (i.e. isolating browser specific bits from your business logic) you're unlikely to encounter any pitfalls. Our testing strategy was to use node and jsdom during local testing and on each commit. IMO, you should generally only need to run tests on an actual browser before each release (as a safety net), and possibly on a regular schedule (if your release cycle is long).

[0] https://www.npmjs.com/package/jsdom

  • AlphaWeaver 8 years ago

    Cheerio [0] is fantastic for this as well...

    [0]: https://www.npmjs.com/package/cheerio
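
    For reference, a tiny Cheerio sketch (assuming the `cheerio` npm package is installed), doing the same kind of extraction with its jQuery-style API:

    ```javascript
    // Load an HTML fragment with cheerio and extract text via jQuery-style selectors.
    const cheerio = require("cheerio");

    const $ = cheerio.load('<ul><li class="post">One</li><li class="post">Two</li></ul>');
    const texts = $("li.post").map((i, el) => $(el).text()).get();

    console.log(texts); // [ 'One', 'Two' ]
    ```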

    • TheAceOfHearts 8 years ago

      I've tried Cheerio as well, but I prefer JSDOM since it exposes the DOM APIs. What I'll normally do is interactively test things out in the browser's console, and then transfer em over to my script. Browser dev tools are just super amazing.

      • madeofpalk 8 years ago

        Agreed - I find the Cheerio APIs to be awkward when traversing deep into the DOM. Last time I used Beautiful Soup I found it also had this problem. The DOM API that JSDOM provides is so much more natural to work with.

      • pitaj 8 years ago

        Cheerio is so much faster and more lightweight than jsDOM. If you can do it with Cheerio, you absolutely should.

    • Rapzid 8 years ago

      Using Cheerio with TypeScript types to parse data from tables in downloaded HTML files. Great tool.

  • madeofpalk 8 years ago

    One question I've had recently is how to scrape a Javascript object out of HTML source. With server-side react + redux, I've wanted to be able to scrape the serialised var __STATE__ = {...} object out to JSON, from nodejs. The best solution I cobbled together was to basically eval() the JS source, which I know is far from ideal.

    • seeekr 8 years ago

      You could use a parser like esprima or its equivalent from the babeljs ecosystem on the JS source instead, find the global variable named `__STATE__`, and just eval its init expression. Cheaper, more secure, and more direct than actually running the JS.

      • madeofpalk 8 years ago

        I actually looked into this (from reading docs, never wrote code) and I wasn't able to find a way to convert the AST for the ObjectExpression into JSON or an actual Javascript object.

        • seeekr 8 years ago

          What you need is a code generation library that will turn the AST back into JS code once you've identified which part of the syntax tree you're interested in. And that's the code you want to then eval(). Esprima has escodegen for that purpose. I'm not sure what the counterparts are in the babel world. Feel free to shoot me an email with any specifics of where you're getting stuck thinking this through (email should be visible from my profile?), and I'll be glad to help.

    • TheAceOfHearts 8 years ago

      You can use the vm module [0] to securely execute the code.

      [0] https://nodejs.org/api/vm.html
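
      As a sketch of that approach, using Node's built-in `vm` module with a made-up inline script standing in for real page source (note that per Node's own docs, `vm` isolates scope but is not a hard security boundary against hostile code):

      ```javascript
      // Run an extracted inline <script> in a sandboxed context instead of eval(),
      // then read the global variable it defined. `vm` ships with Node, no extra deps.
      const vm = require("vm");

      // Hypothetical script tag contents pulled out of server-rendered HTML:
      const scriptSource = 'var __STATE__ = { user: { id: 42 }, loggedIn: true };';

      const sandbox = {};
      vm.createContext(sandbox);
      vm.runInContext(scriptSource, sandbox);

      console.log(sandbox.__STATE__); // { user: { id: 42 }, loggedIn: true }
      ```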

  • tonto 8 years ago

    Right before release is a bad time to realize there are problems with your build

  • h1d 8 years ago

    jsdom is pretty unforgiving and won't load broken HTML. That's where I stopped using it. Could use some tidy tool maybe.

  • draw_down 8 years ago

    jsdom is an impressive achievement, but it may not be what you want depending on what you’re trying to do. It doesn’t mimic the behavior of browsers well in a number of regards, so it will let you do things that real browsers don’t allow. If you’re doing integration-type testing that can lead to tests that pass but functionality that fails in real browsers.

enitihas 8 years ago

For those who haven't looked at some of the commits by @pixiuPL, the list is here : https://github.com/ariya/phantomjs/commits?author=pixiuPL.

To summarize: It does not look like the guy has done a single commit with any meaning. His commits are basically the following:

1. Adding his own name in package.json

2. Adding and deleting whitespace.

3. Deleting the entire project and committing.

4. Adding the entire project back again and committing.

Just out of curiosity: how likely is it that someone could use a large number of such non-functional commits (adding and removing whitespace) to a popular open source repository to boost their career ambitions? (E.g. claiming that they made 50 commits to a popular project might sound impressive in an interview.)

petercooper 8 years ago

Two alternatives:

Headless Chrome with Puppeteer: https://github.com/GoogleChrome/puppeteer

Firefox-based Slimer.js: https://github.com/laurentj/slimerjs (same API as Phantom which is useful if using a higher level library like http://casperjs.org/)

  • mrskitch 8 years ago

    I maintain a puppeteer-as-a-service repo here: https://github.com/joelgriffith/browserless. It's pretty feature-rich at this point, allowing you to specify concurrency and session timeouts, and it comes with a robust IDE (which you can play with here: https://chrome.browserless.io).

    I’m working on building out a serverless model, which is the holy grail of headless workflows, but it’s a bit more challenging to operationalize than one would think.

    I’m hoping that these efforts will lower the bar for folks wanting to get started with puppeteer and headless Chrome!

lukebennett 8 years ago

As has been said, this point was somewhat inevitable with the advent of Chrome and Firefox's headless modes. However, as the project slips into the mists of history, let's not forget the vital stepping stone it provided in having access to a real headless browser environment vs a simulated one. I for one will remain grateful to Ariya, Vitallium and all the team for their efforts.

tnolet 8 years ago

I’m super biased in this, having spent considerable time programming against PhantomJS, Selenium and now Headless Chrome / Puppeteer for my startup https://checklyhq.com. This whole area of automating browser interactions is an extremely hard thing to get stable. In my experience, the recent Puppeteer library takes the cake, but PhantomJS is the spiritual father here. I will not talk about Selenium, for blood pressure reasons.

rumblefrog 8 years ago

Within the issue @pixiuPL created, I listed some of the things that he has shown incompetence on: https://github.com/ariya/phantomjs/issues/15345#issuecomment...

  • mkarnicki 8 years ago

    Nicely put github comment, well done. Thank you. I feel sick in my mouth seeing PL in his username, which clearly indicates my home country. I am beyond baffled.

hrasyid 8 years ago

Ariya wrote a bit about his reasoning here: https://mobile.twitter.com/AriyaHidayat/status/9701730017013... also mentioning an old post in https://github.com/ariya/phantomjs/issues/14541

hartator 8 years ago

I still think it's premature. There are still a couple of areas where PhantomJS is better than Headless Chrome, notably proxy support and API availability.

  • ComputerGuru 8 years ago

    Yes, but what was in it for Vitallium? Continuing to work thanklessly on a project serving users who, as a whole, will leave en masse as soon as headless Chrome gets to parity on proxy support?

  • transreal 8 years ago

    That's not really true. You can use proxies with Headless Chrome via the --proxy-server command line parameter. And the API is richer than PhantomJS's. See the underlying API documentation here: https://chromedevtools.github.io/debugger-protocol-viewer/to....
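
    For example, a typical invocation might look like this (hypothetical local proxy address; `--dump-dom` prints the rendered DOM to stdout):

    ```shell
    # Headless Chrome routed through an (unauthenticated) HTTP proxy.
    # The proxy address below is a placeholder for illustration.
    chrome --headless --disable-gpu \
      --proxy-server=http://127.0.0.1:8080 \
      --dump-dom https://example.com/
    ```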

    • hartator 8 years ago

      It only works for proxies without auth, so mainly local ones. There is currently no way to use a username and password for a proxy with headless Chrome.

redka 8 years ago

Well with Chrome going headless there isn't a whole lot of place for PhantomJS anyway. Or is there? What is it still good for?

  • apocalyptic0n3 8 years ago

    Legacy systems for one. The Cooperative Patent Classification group releases their classifications en masse as HTML (single zip download, which is great). I built a parser for a PHP project that could parse all several hundred thousand records from the HTML in a few minutes. In 2017, they switched to a system that loads in the data from JSON stored in Javascript in the HTML (it is every bit as terrible as you imagine). Obviously loading in the HTML and trying to use regex to match the JSON was a terrible idea (especially since it was encoded to boot...), so I instead used Phantom to load each file, render it, and save it to a temporary file which I then parse using the original pre-2017 parser. Like 10 lines of code in Phantom to do it.

    Obviously in my situation, this is not the end of the world. I use the parser twice a year and Phantom will continue to handle that task just fine. But I also know that a switch to headless Chrome would be an expensive one if it became necessary: we'd have to research it, update local dev environments, implement it, write new tests for it, test it, update our deployment strategy and server deployment configuration, and, worst of all, get all of these changes and new software installations approved by the USPTO, which is a nightmare. My situation is simple, but it would take several weeks to several months to actually deploy to production. As it stands, I will likely have to explain why we have a now-unmaintained piece of software on the server and may be forced to switch regardless.

    I can easily imagine how this project sunsetting, even though there is a clear alternative and successor, could be a nightmare to a lot of people. It's not the end of the world, but it's definitely unfortunate

    • feelin_googley 8 years ago

      Is this the data you were trying to parse?

      https://www.cooperativepatentclassification.org/Archive.html

      • apocalyptic0n3 8 years ago

        Yes, but I just realized I was mistaken. The data I was talking about was the International Patent Classification. CPC was XML, IPC is HTML, and the former/now-deprecated US patent classification system was plain text. I have to deal with all three on a regular basis and have built importers for all three, and I forget which one is which.

        IPC can be downloaded from the link below. I needed the Valid Symbol List. Looks like they fixed the encoded JSON that was there when they first put out the new format.

        http://www.wipo.int/classifications/ipc/en/ITsupport/Version...

    • redka 8 years ago

      Why would you need PhantomJS for that? Can't you just parse the HTML files with Nokogiri and be done with it? That would be orders of magnitude faster anyway

      • tnolet 8 years ago

        Big misunderstanding in browser land. The HTML delivered to you over the wire, the stuff Nokogiri sees, is not the stuff you see on your screen or even when doing a “view source”

        • nkozyra 8 years ago

          OK, obviously the stuff you see on your screen not matching the HTML delivered makes sense, but explain the HTML source not matching what's sent via the HTTP response. DOM can be modified, of course, JS can introduce more dynamic HTML, but view-source should always represent any non-redirected HTTP response. What is Nokogiri getting that the browser isn't (or vice versa)?

          • joatmon-snoo 8 years ago

            > view-source should always represent any non-redirected HTTP response

            Not the grandfather, but generally in browsers you have two versions of HTML "source" - the canonical source, the stuff pulled down over HTTP, and the repaired source, the version that actually gets rendered.

            I'm unfamiliar with Nokogiri, but I suspect that from context, it doesn't repair HTML in the same way that browsers do.

          • apocalyptic0n3 8 years ago

            > JS can introduce more dynamic HTML, but view-source should always represent any non-redirected HTTP response

            That is both true and false. Because the JS can introduce dynamic content, the source returned by the HTTP response often doesn't match the source that is rendered by the browser itself. In many cases, a site will return a skeleton (just HTML) and then make an Ajax request to populate it. In my case, it was just the skeleton HTML with a few hundred lines of JS plus a long string of JSON

            • Kiro 8 years ago

              But we're not talking about the rendered source here. We're talking about "view source", which afaik always matches what is returned by the server.

              The post replied to claims that Nokogiri doesn't see this however so I'm puzzled.

              • dewey 8 years ago

                "view source" shows the source after all the javascript ran. So what a client that doesn't execute javascript (like curl) sees is different from what you see in "view source".

                That's also the reason why you had to "pre-render" your javascript web apps for SEO purposes until Google's bot got the ability to execute javascript.

                • madeofpalk 8 years ago

                  I get what you're saying now, but I believe you're mistaken about "View Source".

                  I've never seen "View Page Source" or "Show Page Source" be the current DOM representation. It's always the HTML that came over the wire, the same you'll get from curl (unless the server is doing user agent shenanigans, which I think we can agree is out of scope here).

                  If you're talking about the page after Javascript has run, the only way you're seeing that is by opening the dev tools and looking in the 'Elements' or 'Inspector' panel.

                  I just checked in Safari, Chrome, and Firefox and found this to be true in all of them. The distinction between the View Source and DOM Inspector is very clear.

                • detaro 8 years ago

                  In what browser is this the case? In Chrome and Firefox it isn't. In the dev tools you see the rendered DOM, but view source shows you the HTML from the server.

      • apocalyptic0n3 8 years ago

        I had to actually render the HTML and run the Javascript in order to populate the HTML with the data I needed to parse. The HTML does not include the parse-able data by default and is populated at runtime from JSON embedded in the Javascript in the HTML.

        As far as I am aware, Nokogiri isn't capable of that and even if it is, I was unaware of that library at the time I wrote the Phantom solution (only discovered it last Summer but have yet to use it for anything)

        • redka 8 years ago

          No, Nokogiri isn't capable of that, so you need an actual browser runtime. I didn't think a downloadable site would have javascript populating the page with data. But if it's only from JSON embedded in the JS in the HTML, then I guess it's still possible to retrieve that, and unless it requires some processing, the JSON is as good as you can get.

          • apocalyptic0n3 8 years ago

            The JSON was encoded (quotes and brackets were both HTML encoded) and couldn't reliably be parsed, or at least not in a way I was satisfied with. Rendering the HTML and actually building out the page as it would normally be rendered and using the parser that I already had built made way more sense. And, at the time, Phantom was the best option I could find for it.

      • forgotmypw 8 years ago

        I think you might have missed this part:

        >In 2017, they switched to a system that loads in the data from JSON stored in Javascript in the HTML

  • minitoar 8 years ago

    Maintaining systems already built on top of PhantomJS.

    • toomuchtodo 8 years ago

      A bit concerning, as youtube-dl relies on PhantomJS currently.

      • netheril96 8 years ago

        youtube-dl will do fine. It is updated once every few days, and with that level of activity, I think they will transition to headless Chrome in no time.

    • paulie_a 8 years ago

      I am curious about this aspect and probably should do some research, but how will Highcharts-to-PDF rendering work?

      Phantomjs was generally great for that type of rendering

  • epx 8 years ago

    Not sure whether it is as easy to use as PhantomJS.

    • nkozyra 8 years ago

      I'd say Puppeteer is on-par with Phantom for ease of basic use. It has a richer, deeper API, of course, but at its core it's modern Javascript.

      • chucksmash 8 years ago

        +1 on Puppeteer. Using it for something now. For small projects, the ability to have the JS you want to run within the context of the page itself live side by side with your browser instrumentation code feels magical. Head and shoulders nicer experience than in cases where half of your logic is second class code-as-a-string (e.g. trying to work directly with Gremlin Server from a non-JVM language by POSTing Groovy-as-a-string)

        • vorg 8 years ago

          > half of your logic is second class code-as-a-string (e.g. trying to work directly with Gremlin Server from a non-JVM language by POSTing Groovy-as-a-string

          It must be particularly difficult when your Groovy-as-a-string script itself has many strings in its code, which is what a typical Apache Groovy build script for Gradle looks like.

        • epx 8 years ago

          Thanks for the info.

    • redka 8 years ago

      Well, that depends on whether you're stuck with Javascript. There isn't anything simpler (that I'm aware of - but I've been doing web scraping/automation professionally for about 6 years) than watir [0]. PhantomJS doesn't even come remotely close.

      [0] http://watir.com/

Analemma_ 8 years ago

There is one thing about this that saddens me: PhantomJS still starts up much faster than headless Firefox or Chrome, at least for me, which makes some of our integration tests take a lot longer than they should.

Has anyone here figured out any tricks to get headless Chrome booted fast?

sergiotapia 8 years ago

End of an era! Congratulations to the team for all their hard work and excellent contribution to helping teams build better software.

All the best to everybody!

pknerd 8 years ago

Somehow I am having issues using both headless Firefox and Chrome. Unlike PhantomJS, where all I had to do was drop in the binary and set the path, neither FF nor Chrome follows the same route, so I am happy to keep using PhantomJS for a while.

isuckatcoding 8 years ago

I would think PhantomJS is still quite heavily used so having some kind of migrator to puppeteer would be useful. I’m sure people would pay $$$ for it.

skrebbel 8 years ago

Thank you, PhantomJS contributors. You built a life saver.

chx 8 years ago

Drupal dropped PhantomJS too https://www.drupal.org/project/drupal/issues/2775653

kschiller 8 years ago

Does anyone here know if there's a way to set SSL client certs with Headless Chrome? With PhantomJS I could use

  --ssl-client-certificate-file and --ssl-client-key-file
Changu 8 years ago

I do lightweight web automation via Chromium's "Snippets". It is super nice to work that way because you see on screen what happens and can check everything in realtime in the console. The only problem is that they don't survive page loads. So when my snippet navigates to a new url, I have to trigger it again manually. What would be a good way to progress from here so I can automate across pages?

  • icebraining 8 years ago

    Greasemonkey and its descendants (e.g. Violentmonkey) can run user scripts which work across pages.

moondev 8 years ago

I remember taking full page screenshots with phantom back in the day. Really cool project. Nightmarejs is another alt with a friendly api.

rutierut 8 years ago

One of the guys working on P-JS just linked from a GH issue to his open letter... He isn't very happy with the owner, blah blah blah, and is going to fork the master branch to make phantom great again. I'll just put this here:

"Will do as advised, as I really think PhantomJS is good project, it just needs good, devoted leader."

  • enitihas 8 years ago

    It does not look like the guy has done a single commit with any meaning. His commits are basically the following: 1. Adding his own name in package.json 2. Adding and deleting whitespace. 3. Deleting the entire project and committing. 4. Adding the entire project back again and committing.

  • paulie_a 8 years ago

    That sounds slightly ambiguous: is that person going to be that leader, or are they looking for one?

chirag64 8 years ago

Shoot, I was just planning to use this for generating PDFs out of a URL on nodejs. Does anyone know of any other library / module out there that is good at this?

wnevets 8 years ago

Is headless Chrome's API just as easy to work with? Taking a screenshot or saving a page as a PDF is stupid simple with phantomjs.

wxyyxc1992 8 years ago

Thanks & Goodbye
