Android vs iOS: A Developer's Perspective
whereoscope.wordpress.com

I totally agree with some of the points: iOS should really receive the same garbage collector that OS X has had for years, and the provisioning certificate nonsense is, well, nonsense. iOS really does need a side-loading mechanism. That you need a Mac to develop on is, I suppose, a negative - you can get going with Android on almost anything.
However, I can't say I've ever had any problem with Apple's documentation: it's clear, well written, and generally correct. I must confess, I've never spent "weeks devising and performing increasingly peculiar experiments to figure out how to get iOS to do what [I] want", any more than on any other platform. If he's complaining that iOS has private APIs then, well, I'm quite sure Android does as well - private just means "not guaranteed to exist in the same form on an upcoming release". If he's claiming that Android's "openness" allows him to see deep inside the OS to make design decisions, rather than relying on the documentation, then I'd suggest that's a mad development strategy (unless one likes rewriting when new OS releases come out).
The point about the simulator seems to be that Android's is so bad, you have to use the phone. I can't really see that as a plus, as one could do exactly the same thing on iPhone, except that iOS has a working simulation environment for when you want it.
The remaining points, about the initial user experience and development environment are entirely subjective, so one can't really comment either way. His point that developing for Android seems to be "easier" than iPhone runs contrary to my experiences, but what one man finds easy, another might find hard.
Agreed. I had the exact opposite reaction when I started doing some Android dev after coming from iPhone.
I find Xcode to be at least 80,000 times better than Eclipse (memory usage, UI, interface builder, speed, general bugginess).
I also REALLY like Apple's docs and the ease of integrating C code (no NDK!) when you need to do something like real-time audio.
My only really big complaint is the certificate signing process which can be a real pain.
I mean, I can totally see why Android might feel better or more familiar to a Java programmer though.
I have a question. I asked this to a few mac users and haven't received a good answer.
How the hell do you get Xcode (or other programs, but Xcode is particularly bad) not to end up as a big pile of small windows you can't access effectively because they don't have a dock icon? The only way I've found is to long-click on the Xcode dock icon, which after a while splatters small versions of the windows everywhere, then scan these tiled windows until I find the right one and click on it. I have to go through that atrociously long multi-step process every damn time I want to take a glance at another window! This, for example, makes the internal iOS documentation useless to me. At least I can use the web documentation to get the browser's dock icon, but when I don't have an internet connection I'm out of luck.
While I'm at it, is there any way, when using Spaces, to do a desktop change in one click? I'm mainly a Linux user and I'm used to having multiple desktops. OS X also has this functionality, and the Spaces icon actually has four little squares on it that represent the four desktops it's controlling. However, when I click on one of the small squares, instead of going to the right desktop like it does in GNOME/Linux (and has been done right since about 1995), it goes into an animation where the four desktops are displayed tiled full screen and I have to pick one. That is a two-step process with an animation in between for something that should clearly be instantaneous. Is there an alternative to this? Both these things are driving me insane!
Like in the article, I bought a Mac just to do iOS development, and up to now my experience on OS X has felt like using a broken, out-of-date GNOME desktop with serious usability issues.
To use XCode in a single window, switch it to the All-in-One layout in the preferences:
http://iphonedevelopment.blogspot.com/2009/03/xcode-single-w...
Also, in any OS X application with multiple windows you can use Command-` (i.e. Command + backtick key) to switch between windows of the active application.
I'll add this because it has happened to me - sometimes you're in All-in-One mode and you still don't see your code, just the name of the file you selected at the top of the window. This is because there is a horizontal divider that's pulled all the way to the bottom of the window. Look for one little dot at the bottom center of the window; if you drag that up you'll be able to see your code again.
All-in-One is kinda nice. It seems it should be the default. It still doesn't make the documentation window of any use though. I guess it has a little less chance of being hidden under a pile of other windows now. Command+backtick doesn't seem to do anything on my mac.
Strange that Cmd-backtick doesn't work for you. Try this: Open up System Preferences, select "Keyboard", change to the "Keyboard Shortcuts" tab, and highlight "Keyboard & Text Input" on the left. Is the "Move focus to next window in application" box checked? What's its shortcut?
Also, long-clicking on the dock icon just activates Exposé for Application Windows. I have it set up to activate upon mousing to the top-right corner and really like it. You don't have to wait for the delay of clicking and holding on the dock, though you'll still have to scan for the correct window.
I wish I could vote you up more than once. Not using all-in-one layout in XCode is Considered Harmful. This is even more important than binding Open Quickly to CMD-O or some other quick key shortcut.
I've got a few tips that might make your experience a little better. Option 3 below will be particularly helpful for accessing the documentation more easily.
Option 1: Re-map your Exposé keys. I mouse (tablet, actually) right-handed, so I use the following shortcuts, which I can reach quickly with my non-mouse hand:
F1 - All Applications
F2 - All Windows
F3 - Desktop
F4 - Spaces
This allows me to use Exposé/Spaces via keyboard in tandem with the mouse. While technically two steps, it feels more like a single, coordinated step to me.
I use XCode in the Condensed (not All-in-One) layout, which results in lots of small windows. I hit F2 (or F1 if I've secluded XCode to a single Space), then either:
- mouse over the various windows and press the space bar to see a zoomed-in view of that window. Then, click the one you want to bring into focus.
- press an arrow key to highlight a window, press space to see a zoomed-in view, press arrow keys as necessary, and then press F2 again (or click the left mouse button) to exit Exposé.
Option 2: Press Cmd + Shift + D. This will bring up the "Open Quickly" dialog box. Start typing the name of the file you need to open or bring into focus.
Option 3: Use shortcuts to go immediately to the definition of a class/method/protocol/etc., toggle between .h/.m files, or open the documentation to whatever's under the mouse.
- Press Cmd key then double click a class name (or method name or whatever). This immediately opens a window to the definition.
- Press Cmd + Option + Up Arrow to toggle between .h and .m files for a class
- Press Option then double click a class/method/etc. name to open the documentation in a floating window.
- Press Cmd + Option then double click a class/method/etc. name to open the documentation in the XCode documentation window.
Mac OS X is littered with these kinds of accelerated interface shortcuts. I wish I could point you to a good, consolidated guide; but I have yet to find one on the web. Several of them can be found in the opening chapters of Aaron Hillegass' Cocoa programming books.
Don't know about the first one, but you can use Ctrl + cursor keys to move between Spaces. Fairly sure that's the default, but if not, it can be set up in the preferences for Spaces.
It doesn't seem to be the default but I guess I can find out how to set it up. I still think you should get a mouse based one click interface though.
Ctrl+1 through 9 switches Spaces directly. If you enable the Spaces menu bar icon in System Preferences, you can click on the icon and choose which Space to switch to (in Snow Leopard at least).
Grab Xcode 4. It's still beta and buggy but it's all one window and is pretty good IMO.
There are keyboard shortcuts for Spaces. And Cmd-` to switch between windows within apps.
OS X isn't perfect, I switched from KDE 5 years ago. Eventually you start noticing all the little touches that mean it is—really—light years ahead of Linux desktop options.
It does have a lot of little "Apple" touches though that you'll just have to take with a LOL.
Another tip: ctrl-shift-d brings up the "File > Open Quickly" dialog, where you can type the name of a file into a filtered list (eclipse also has something like this). So instead of finding the right window, just bring up whatever file you need via the keyboard. I find this faster than using the mouse or cmd-`.
Another shortcut I used frequently is cmd-shift-e. This toggles "View > Zoom Editor" so that I can see more code.
Well, there's Cmd + ~ hotkey for switching windows of active apps in 10.6, and it can be added using some small app in 10.5
It existed off the bat in 10.5 :).
Cmd+` has existed for almost as long as the MacOS itself - and I'm not just talking about MacOS X here. Interestingly, BeOS also used the same shortcut, maybe because a number of Be engineers initially worked at Apple and the original BeOS was Mac-only. You can also use Alt+` in GNOME with the same effect, though you might have to enable a setting in GConf first.
There is also an expose option to show all windows for an application. Set it to a keyboard hot key.
I'll make just one refutation to this: I am not a Java programmer :)
I had to learn Java specifically for this project. Python is my preferred hammer for most nails, but not an option on mobile. I've also been a professional C programmer before, wreaking havoc in the kernel. I've got opinions on Objective-C, but that's a subject that deserves a whole separate post.
Just a quick reminder: You don't have to use Eclipse to develop for Android. You can use any other Java IDE (I use the free version of IntelliJ IDEA) or no IDE at all. You can even use a combination of XCode and Maven to develop for Android.
Garbage collection on OS X appeared in Leopard with Objective-C 2.0 - later than the iPhone, but before the iPhone SDK. Older models are still around, and even the iPad has less memory than the iPhone 4, so it is not unreasonable to treat memory as a resource too precious to be left at the mercy of GC. On the other hand, manual management is not that complicated and does not add much overhead in programming, except in the initial phase of getting used to it.
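The discipline in question can be sketched with a toy model (plain Java, with invented names - not real Cocoa API) of what retain/release asks you to track by hand:

```java
// A toy model of Cocoa-style reference counting (class and member names
// are invented), illustrating the discipline iOS code follows manually:
// every retain must be balanced by a release, and the object is freed
// exactly when the count reaches zero.
class Counted {
    private int retainCount = 1;   // alloc/init leaves you owning one reference
    boolean deallocated = false;   // stands in for -dealloc having run

    void retain() {
        retainCount++;
    }

    void release() {
        if (--retainCount == 0) {
            deallocated = true;    // the -dealloc analogue: free resources here
        }
    }
}
```

The "overhead" the parent mentions is exactly this bookkeeping: one forgotten release is a leak, one extra release is a crash.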
If they do ever introduce it, I sincerely hope it is completely optional. On my iPad app, I run with such tight memory constraints while dealing with large image files that it has to be freed up when I tell it to or I'm likely to get killed by the memory watchdog.
The existence of a garbage collector doesn't mean that the programmer has to play a totally hands-off role in the allocation of objects; as with any coding, there are usually a few bottlenecks that deserve special care. A system with a good GC will provide opportunities to tune; for example, to make hints to the GC about lifetime and locality and so on.
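One concrete (Java) form such a lifetime hint takes is a soft reference: wrapping a cache entry in one tells the collector it may be reclaimed under memory pressure, without the app freeing it explicitly. A minimal sketch, not a production cache (entries are never pruned):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// A cache whose values are held via SoftReference, hinting to the GC
// that these objects are expendable when memory runs low.
class HintedCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    V get(K key) {
        SoftReference<V> ref = map.get(key);
        return ref == null ? null : ref.get(); // null once the GC has reclaimed it
    }
}
```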
I would expect it would be, if they follow the pattern from OS X. I do wonder what the breakdown is in new OS X apps between GC and retain/release.
I treat memory as a resource too precious to be left to the mercy of programmers. In any application, a good garbage collector can be more efficient than malloc/free or new/delete.
How about compared to retain/release/autorelease?
?? I admit the last time I really coded in Obj-C was for NeXTSTEP. But back then init ultimately called malloc, and release ultimately called free. In any case, yes, a good GC can be faster as it takes advantage of cache effects.
I upvoted you, but I was mostly just quoting the jwz article from 1998.
Older iPhone models cannot run the current OS anyway.
And manual memory management is not a showstopper, but to say it's "not that complicated and does not add much overhead" is just wrong. It's conceptually simple, but the devil is in the details. I would bet that if you took a poll of all Cocoa programmers who have been working in the field since - well, pick any date you think qualifies as "out of the initial phase" - and asked them how many of their programs have done correct memory management without debugging, you would get an answer of 0.
This is true only for the oldest (1st gen). The iPhone 3G and 3GS run 4.2 just fine. Sure, some of the most prominent features of iOS 4 are missing on the 3G, but it still does not qualify as not running.

I don't disagree with the premise that memory management requires more work; however, the tools and utilities that Apple now provides to hunt down memory issues make it more tedious than difficult.
Clarification: We don't use any private API's on iOS, James' comments were with respect to what we do with location and networking.
Apple's documentation is great for most visual elements, but CLLocation* in particular has quite flawed documentation.
Where are the flaws in CoreLocation or the documentation? CLLocationManager is instantiated like any other NSObject. CLLocationManagerDelegate returns asynchronous results like any other protocol in Cocoa/CocoaTouch. CLLocation, CLHeading, and CLRegion are about as close as you're going to get to C-style POD structs in Objective-C.
The documentation is all here, and the API is about as straightforward as it is going to get. http://developer.apple.com/library/ios/#documentation/CoreLo...
I don't want to veer O/T, but that's not the hard stuff in CLLocation.
There's a lot that could be discussed, but as one example: optimizing for battery conservation requires knowing which radios are currently powered up.
iOS makes its own decision as to which of the 3 styles of location service to engage, based on the desiredAccuracy and distanceFilter values.
However, the WiFi and GPS radios have different costs for runtime and warmup, so this API doesn't help when you are attempting to optimize for all 3 of: Accuracy, Timeliness, Battery conservation.
Furthermore, the accessible battery percentage in UIDevice.batteryLevel is only reported to the nearest 5%, which is not granular enough to be of use in real-time server-based tweaking.
There's a lot that could be discussed...
There isn't a whole lot to be discussed. iOS minimizes battery usage based on how close cell towers are, the type of cell tower, altitude, the WiFi hotspot database, the state of the GPS almanac, the hardware/driver combination, etc., like any good OS/kernel (including Android) should.
The App sets CLLocationAccuracy to kCLLocationAccuracyBest, kCLLocationAccuracyNearestTenMeters, kCLLocationAccuracyHundredMeters, kCLLocationAccuracyKilometer, or kCLLocationAccuracyThreeKilometers and that's it.
The OS will almost always be able to minimize battery usage because it has way more information than the App could or should ever have. The App will never be able to optimize this setting because there will always be new hardware with new tradeoffs.
Your claim is that Apple's documentation does not describe the implications of the laws of physics for an arbitrary location that the user may reside at and does not reveal implementation details of the kernel that change every hardware/software revision.
The API could not be much simpler or much better documented.
Do you really think that Android's documentation is better? http://developer.android.com/guide/topics/location/obtaining... http://developer.android.com/reference/android/location/pack...
Because the Criteria class on Android is essentially the same as CLLocationAccuracy. http://developer.android.com/reference/android/location/Crit...
I really would like to know if you think that Apple's documentation is poor or if you are just trolling because the facts of the situation don't support your argument.
Which part of the documentation is "quite flawed" like you originally claim?
So, I will say that I bundled documentation and "openness" into one box, which I probably shouldn't have. The connection is that, absent the "openness" we were really looking for, we sought documentation to describe what's going on. That said...
CoreLocation is a good abstraction. However, its primitives for specifying what trade-offs you are willing to make in acquiring a location are, well, primitive.
A couple of random examples:
- CoreLocation doesn't tell you the source of the location sample (GPS, WiFi, etc). It gives you an estimate of accuracy. Of note, it doesn't give you a measure of the accuracy of the accuracy. This is of import as we have seen examples where the data is off by a whole hemisphere -- I'm not kidding! I understand that exposing these details is kind of "ugly", but obscuring it is removing signals that we could use to figure out the reliability of the data, and what techniques we might be able to use to "clean" the data. I am willing to concede that CL is a good API for general use, but when you're building consumer products, that doesn't cut it. The guy in New York who was reported as being in Antarctica (again, seriously), doesn't really care that the iPhone doesn't provide us the tools to fix that, he just wants it to work (and he's no longer a user).
- Related to the first point, but separate: the implementation of the algorithm for seeking to the desired accuracy is a black box. This makes it really easy to use for basic stuff, but you really don't have any way of knowing the result, in milliwatts, of passing in a given value to that argument. There are ways to mitigate this (which we've had to explore), and we can experiment to learn what the drain is, approximately, but hiding that information has obstructed our development process. Consider also that CL doesn't allow me to specify how long I am willing to wait to get the location fix at the desired accuracy. It does not let me set a budget in milliwatts for a location fix. I realise that Android doesn't provide those exact abstractions, but the tools it does provide make it easier (by which I mean "possible") for me to build them myself.
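For illustration, a "cleaning" heuristic of the kind alluded to above might look like this plain-Java sketch. The threshold and names are invented, not anything CoreLocation or the Whereoscope code actually provides; the point is that without knowing the sample's source or the accuracy of the accuracy, an app is reduced to plausibility checks like this:

```java
// Hypothetical sanity filter for location fixes: discard a sample that
// is implausibly far from the last accepted one, since the reported
// accuracy value alone cannot flag a wrong-hemisphere reading.
class FixFilter {
    static final double MAX_JUMP_KM = 500.0; // assumed threshold, tune per use case

    static boolean plausible(double lastLat, double lastLon, double lat, double lon) {
        return haversineKm(lastLat, lastLon, lat, lon) < MAX_JUMP_KM;
    }

    // Great-circle distance in kilometers between two lat/lon points.
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double r = 6371.0; // mean Earth radius, km
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }
}
```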
I also don't think you've successfully made the argument that the OS will know better than us. It's a generic tool, which makes some assumptions. It will have made compromises that don't necessarily work for us. It will not perform optimally for every use case. Going back to something I said before, it seems to optimise for getting to the desired accuracy quickly. For background location tracking apps like ours, that is not a priority. Power is. Neither CoreLocation's abstraction nor documentation provide for this use-case.
"The OS will almost always be able to minimize battery usage because it has way more information than the App could or should ever have."
This is only true for standalone apps which don't share location information between people.
The OS doesn't know that one person wants to turn on the GPS on another person's phone.
I'm actually pretty pro-Apple and anti-Android (James, OP is the Android developer and I develop exclusively on Obj-C).
I'm not sure I agree with the blog post then. The Android docs certainly do not detail this kind of information either.
In fact, I've found the docs for Android to be of the "broad but shallow" variety. That is they cover everything (thanks to Javadoc) but they don't necessarily provide usage guides on the various APIs.
My opinion on Android development vs. iPhone is:
a. GC is convenient
b. The developer docs on Android are not nearly as good as Apple's
c. XCode (while not perfect) is less buggy than Eclipse for basic development activities. Eclipse just gets in the way most of the time and can't keep up with my typing speed which is really frustrating.
d. I like ObjC's dynamism a lot more than Java. This is a personal preference.
e. It's nice to be able to peer into the OS source on Android as the definitive answer for API questions, but I suppose if the docs were better you wouldn't need to do that.
f. The Android APIs are full of pattern inconsistencies in their implementation compared to UIKit on the whole, which makes them more difficult to learn.
g. UI Layout on Android is abysmal compared to using Interface Builder.
h. Again - this is more of a Java thing, but Android lacks the ability to do conditional compilation. You have to go through build-system/scripting gymnastics to get what you would get from a simple #if statement in C.
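The usual Java workaround for that last point is a compile-time constant condition, which javac is required to dead-code-eliminate. A sketch with invented names:

```java
// Approximating C's #if in Java: when the condition is a compile-time
// constant, javac drops the dead branch, so debug-only code does not
// ship. Unlike #if, though, this can't exclude whole files or vary per
// build without editing or generating this constant - hence the
// build-system gymnastics the parent mentions.
class BuildFlags {
    static final boolean DEBUG = false; // flipped by hand or by a build step

    static String logLevel() {
        if (DEBUG) {
            return "verbose";   // compiled out entirely when DEBUG is false
        }
        return "quiet";
    }
}
```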
There are some things in the private APIs that make dealing with certain problems a hundred times easier. Sometimes you just want the GUI to force a rotation to a certain orientation at some point in the app. Other public APIs, such as rotating the top bar, are more crufty and don't work 100% of the time, or require a fragile workaround where you do all the rotation manually yourself because the public APIs are fragile, while the one-line simple private API would have done the trick.
"even now when I show Whereoscope on Android to iPhone users, I need to explain the basics of navigating an Android phone to them before they can use it."
I just can't let this stand. I have an iPod Touch and an Android, and I struggle a lot with the iPod Touch. Even making the MP3 player (iTunes?) do what I want is a challenge, and that is a native Apple app. I also had a lot of problems with iPad apps when I tried the iPad of a friend. The lack of a back button is a problem if the browser pushes you into some other app (YouTube or Maps), for example.
I could go on and claim that Android usability is so much better than the iPhone's (which I personally feel it is). But let's just assume that this guy is used to the iPhone and hence can cope with it better than with Android.
Also, if his users struggle with his app on Android, it is probably his fault. What is stopping him from giving it the same interface as the iPhone version? iPhone has one button, Android has 4. So it should be possible to use the same interface on Android, assigning one button to behave like the iPhone button.
Btw, you don't actually have to use Eclipse for Android development. You can do everything with the command line, and hence integrate the development environment (simulator, build script) into any editing environment you want. I am not sure if the same is possible for XCode, but I don't think it is. If XCode does Java, you could probably even use XCode for Android development.
Actually we started off with that - having the same interface for both Android and iOS.
But Android users don't expect Android apps to behave like iPhone apps - so the affordances don't carry over.
They kept tapping the context menu, or holding down list items: actions which are normal on Android but nonexistent on iOS.
The shoddy state of the simulator really irks me on Android - it's really necessary that it works well, because there are so many different models of phone.
The Android version of my app apparently has a crash-on-startup bug on a single type of Android phone (Droid X), shows up as windowed in others, and works just great on the Nexus/Droid I've tried it on. I can't test on all that physical hardware, though, and the emulator is slow enough that it's nearly useless - various background services on the virtual machine complain about timing out when starting it up.
The fragmentation of that market doesn't seem worth dealing with for the amount of activity on the marketplace.
I actually have a pretty massive patch for QEMU sitting in a computer somewhere (with a pretty massive bug) so I know QEMU pretty well. QEMU is actually very fast if used correctly. Android is not using QEMU correctly. I'm not really sure what they're doing wrong but if I can virtualize a VMM which then virtualizes another OS and the interaction is essentially real-time, basic ARM and Java should not be out of the performance target.
P.S. If you think QEMU is slow, don't even think about Bochs.
QEMU is a truly awesome piece of code. Everything Fabrice Bellard does is incredible.
You could well be right that they're not using it correctly -- that sounds entirely plausible. I guess my point was more that, whatever the cause, the net effect of it is that the Apple Simulator is unrealistically fast, and the Android Emulator is unrealistically slow. Neither really encourage great development if you rely on them.
Agreed -- I just wanted to defend QEMU (and my own work, by extension) for a moment.
Also, code can always be made to go slower, so I'm not sure "unrealistically fast" is as bad as "unrealistically slow."
It's actually much easier to "virtualize another OS" than to virtualize a hardware platform (and associated OS).
If you are running guest code that matches your host architecture, QEMU can run the code natively. If you are running foreign code (e.g., ARM on an Intel host), it has to dynamically recompile the code, which will hurt performance.
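The recompilation idea can be illustrated with a toy Java sketch: each "guest" op is decoded once into a host closure, and the chained result is kept for reuse, loosely the way QEMU's TCG caches translated blocks so re-execution skips the decode step. The opcode names here are invented:

```java
import java.util.List;
import java.util.function.IntUnaryOperator;

// A toy sketch of dynamic translation: "guest" instructions are decoded
// once into host closures; the composed closure is the cached
// "translation block", reusable without re-decoding.
class TinyTranslator {
    static IntUnaryOperator translate(List<String> guestOps) {
        IntUnaryOperator block = x -> x;                 // empty block: identity
        for (String op : guestOps) {
            final IntUnaryOperator prev = block;
            final IntUnaryOperator step;
            if (op.startsWith("add ")) {
                int n = Integer.parseInt(op.substring(4));
                step = x -> x + n;
            } else if (op.startsWith("mul ")) {
                int n = Integer.parseInt(op.substring(4));
                step = x -> x * n;
            } else {
                throw new IllegalArgumentException("unknown op: " + op);
            }
            block = x -> step.applyAsInt(prev.applyAsInt(x));
        }
        return block;
    }
}
```

The decode cost is paid once per block rather than once per execution, which is why a well-used TCG can stay close to native speed on hot code.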
Actually, my research was simulating a different hardware architecture and the patch I have is for hardware-assisted virtualization under arbitrary hardware architectures. It does have to recompile the code, but the TCG does a pretty good job of that. The virtualization target can do a lot better by keeping TCG in mind while structuring its binary execution, which may be one thing that they don't try to do in Android.
The lack of a decent simulator also affects web apps. I downloaded the SDK and installed it in hopes of using it to test mobile versions of sites. It's rather difficult to test when it can take ten minutes to fully load and render a page!
I can second this one thousand times. I developed the blip.tv Android app. It runs well on the Samsung Galaxy S phones, but crashes on startup on _all_ other Android devices. No clue why.
Do you have a debugger attached? I've found that Android's emulator is a bit slow, but not unreasonable; but then attaching Eclipse's debugger slows it down by two extra orders of magnitude, which makes it unusable.
I think it's easier to do simple things on iOS and easier to do complex things on Android. I built a simple iOS app a while ago and was amazed at how easy it was. I didn't customize a thing and kept it all looking exactly how the built in libraries made it look. It was just a matter of throwing some things into interface builder and wiring them up. Then I wanted to do the same thing in Android and was immediately baffled by this crazy HTML-like language that would never work quite right and was horribly verbose.
Now, however, my company is developing an iOS app, and we're following screens given to us by a designer. I think I wouldn't mind that layout language now...
I found that matches my experience fairly well.
I haven't found Apple's documentation to be significantly worse than Android. Parts of the documentation are sparse at best (the Cocoa layer is great, lower-level stuff less so), but overall both systems have good documentation.
Fully agree about Apple's certificates - it feels like I have to spend an hour or two every few weeks trying to figure out some provisioning profile problem. By now I think I've gone through almost every possible thing that could go wrong with them, so it's a lot faster to fix, but it was incredibly frustrating at first. Apple automated some of that through XCode a few releases back, but that stopped working after a few months and I haven't been able to get it to work again - back to doing everything by hand.
Also fully agree about the Android emulator.
All in all, the two platforms are very close in terms of difficulty - they each have different downsides. I'm a lot more familiar with the iPhone, so Android development goes a bit slower, but I suspect with similar amounts of experience there shouldn't be a significant difference in development time.
An interesting perspective. It really seems like from the programmer's point of view, Android has found a nice sweet spot in productivity - a nice, comfortable, garbage collected but CPU-slow programming environment to do all of your 'OnClick' programming, and then the NDK and C/C++ for when CPU time matters.
iOS puts you in the C/C++/ObjC world for just about everything, unless you want to slog through Javascript. It's been rumoured that Apple is working on a version of MacRuby for iOS - this can't come fast enough.
The feeling you get with XCode is that it slows you down. I'm a Java developer by day, iPhone dev by night. Eclipse/Java are _light years_ ahead of XCode/Objective-C. My gripes with XCode range from the build process, the awkward debugger, and Interface Builder all the way down to little minutiae like key bindings that just don't make sense (try selecting a block to indent it: every other IDE in the world uses tab, XCode uses Cmd-]. wtf?).
Not to mention Objective-C, which, as you allude to with garbage collection, is a far inferior language to Java. There are things like passing undefined messages to objects, which only generate a warning at compile time - and sometimes those warnings don't appear in XCode - so when your code doesn't work, you're left scratching your head. And why isn't the + operator overloaded for string concatenation?
Finally, Objective-C is a very awkward language to use at the keyboard. Object notation [] in particular slows me down a lot - somehow (at least for me) it's easier to type () than it is to type [].
Thanks for the article, now off to download the android SDK!
P.S. I'm not saying Java is a very elegant language - far from it - but, in my opinion, it's more elegant than Obj-C.
"every other IDE in the world uses tab, XCode uses Cmd-] wtf?"

Textmate uses ⌘[, skEdit uses ⌘[, CSSEdit uses ⌘[, Coda uses ⌘[, BBEdit uses ⌘[.

Objective-C, for me personally, is probably the second most elegant language after Ruby.
Sure, but as counter-examples, take Eclipse, Visual Studio, Netbeans, Notepad++, Textpad, XMLSpy...
From your example, it's apparent that Mac-only editors use cmd-[. This is a great idea as it promotes cross-platform compatibility and makes life so much easier for developers. /snark
Likewise, I have a problem with Visual Studio and Eclipse and their debuggers: F6 is step-over in Eclipse, while in Visual Studio F10 is step-over and F11 is step-into.
By the way, try using the Visual Studio debugger in a virtual machine on OS X. Another example of an awesome key-binding in OS X (who uses F11 anyway? Let's assign it to something system-wide!)
+1 on the IDE stuff of your comment.
But Objective-C is a really nice OO language. It seems you do not get its dynamic nature. Yes, its different, but by no means inferior. And yes, tooling is much better for static languages like Java.
You may be right that I don't get Objective-C. I understand that you're not actually calling methods on objects, but instead passing messages to it. What I don't get is why the IDE and the compiler accept invalid method (message) signatures. For example, if you tried [[UIView alloc] mymethod], XCode won't say anything (unlike Eclipse which would catch that mymethod does not exist), and if you try compiling it, you will only get a warning: "UIView may not respond to mymethod". On top of that, XCode won't always display that warning, so you can run into some serious trouble.
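The runtime behavior being described can be mimicked in Java with reflection - a rough analogy for illustration, not how Objective-C's runtime is actually implemented:

```java
import java.lang.reflect.Method;

// A rough Java analogue of an Objective-C message send: the method is
// looked up by name at runtime, so a bad "selector" is only caught when
// the message is actually sent - not at compile time, which is the
// trade-off the parent is complaining about.
class Messenger {
    static Object perform(Object target, String selector) {
        try {
            Method m = target.getClass().getMethod(selector);
            return m.invoke(target);
        } catch (NoSuchMethodException e) {
            // Obj-C would raise an unrecognized-selector exception here,
            // at runtime; this sketch just returns null.
            return null;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Ordinary Java calls are resolved by the compiler, which is why Eclipse can flag `mymethod` immediately; the compile-time warning is the best Objective-C can do while keeping this dynamic lookup.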
Even if XCode were consistent, suppose you are working with some frameworks (like OpenFeint) which, when compiled in your code, have a couple of warnings here and there. How do you tell your warnings from theirs?
More importantly, calling a method (or passing a message) with an incorrect signature should absolutely be a compile-time error.
I'm not trying to be facetious here; I really do run into these problems. I'm genuinely asking why Objective-C and XCode should be considered on an equal footing with some other languages.
(P.S. I'm really fired up about XCode and Obj-C because I work with them almost daily and these things bother me to no end, so hopefully you're not interpreting my passion for wanting to improve these tools as smugness or arrogance directed at you.)
This is pretty much the opposite of my opinions. Android documentation is awful. Mostly it isn't there at all. Many times I found that it was in fact wrong. iOS documentation is really good. Garbage collection on mobile devices means you get UI stutter because GC kicks in when you don't want it which means your app feels less slick. Xcode 4 is pretty good, but yeah Xcode 3… However Eclipse is slow, clunky and buggy.
Also I have found myself doing weeks worth of hacks on Android AND iOS. Both are large frameworks, and ultimately they don't have abstractions for everything you may want to do.
The article reads like the guy hasn't really got his feet wet with Android development yet. He's yet to be bitten by not handling the activity lifecycle correctly for instance. The real edge cases of that didn't start materialising until we had 20,000 beta testers.
I was interviewed on this topic in fact: http://www.androidpolice.com/2010/11/14/developer-interview-...
I'm stumped why memory management is so hard for developers, to the point I have to raise an eyebrow every time I read it. Are you seriously that lazy?
The docs about it are fairly straight forward:
http://developer.apple.com/library/mac/#documentation/Cocoa/...
Instruments makes it exceedingly simple to track down leaks.
While the iPhone 4 could probably handle a GC in most cases, the iPad is less capable.
XCode is a personal preference.
I dunno; in the time I've spent with Android, and in A/B'ing respective apps, Android has almost always "felt" slower. I get that's totally subjective, but that's been my impression. For example, Angry Birds on the Galaxy Tab versus Angry Birds on the iPad are nowhere near the same experiences. The Galaxy Tab is jerky and slow, while the iPad is smooth.
I still don't get why memory management is so hard for you though.
And don't get me started on those lazy kids and their assemblers. I mean, how hard is it to remember a few dozen opcode hex values?
Less snarkily: Developer resources are not infinite. Time spent futzing with memory management in non-performance-critical areas is time not spent improving performance where it actually matters, adding features, or improving the user interface.
For example, Angry Birds on the Galaxy Tab versus Angry Birds on the iPad are nowhere near the same experiences. The Galaxy Tab is jerky and slow, while the iPad is smooth.
The Android code for Angry Birds is primarily in native code, so garbage collection is unlikely to be the cause of your observations. And it's perfectly smooth on my Nexus One.
How is the most ignorant comment in this thread voted up?
First of all, he offers no evidence that memory management in objective-c is resource or effort intensive. He can't, because it's not. This is not hard:
How is that hard exactly? There are only four rules you need to follow for memory management in Objective-C/Cocoa/CocoaTouch:

* If you use a convenience method, e.g. [NSString stringWithFormat:...], you don't own it, so don't release it.
* If you use alloc, copy or new, you own it, so release it.
* Implement dealloc to release fields you own.
* Never invoke dealloc directly.

For example:

    -(id)initWithSomeNumber:(NSNumber *)aNumber {
        ...
        someField = [[SomeObject alloc] init];
        someOtherField = [aNumber copy]; // or retain, your call.
        ...
    }

    -(void)dealloc {
        [someField release];
        [someOtherField release];
        [super dealloc];
    }
I mean, you can be as snarky as you want, dude. I write Cocoa apps all day long (as well as web apps and mobile apps) and I can tell you and the original poster are either idiots or lazy. I'm going to go with lazy.
It's totally about laziness! But that's what computers are all about -- I could store printed versions of all of my documents in a filing cabinet, and go and manually sort them every time I needed a different ordering, but I'm lazy! I use a database!
I just don't see why laziness should be restricted to users. Developers are lazy too.
You're right that there are only 4 rules (or more or less depending on your formulation), but I don't care. I'd rather take the time to have another martini. Or, y'know, implement features that make my users happy.
And it definitely gets harder when there are more moving parts. You're right that the rules are simple, but the execution of those rules gets more complex as you add more components, more threads, remoting, etc. I never said it was impossible, or up there with Fermat's Last Theorem or anything like that. Just that this is work the computer could be doing for me. I want to be lazy, but Apple won't let me.
Memory management in Obj-C has issues. The documentation is spotty at explaining how to handle it properly, or maybe it is a problem with the organization of the documentation, because I can't always find what I need even though I know it is there. When I started, I kept hearing the general rule of thumb, "if you don't alloc/init it, you don't need to release it." Except no one mentioned that you have to release properties, and I never alloc/init them. Along with that, the difference between an ivar and a property is poorly documented. The fact that "aField = x" is different from "self.aField = x" and can mess up memory management was lost on me.
My problem with the documentation, as I said before is mostly organization. Along with that, some of these concepts are defined in ways that only make sense if you already know the language and framework.
After a couple projects and code reviews, I think I understand it now. It is not that we are lazy for not understanding or liking it; it is that it isn't intuitive or simple like GC.
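The ivar-versus-property trap mentioned above can be sketched in plain Java with a toy retain count (all class names here are hypothetical): a retain-style property setter balances the counts for you, while writing to the field directly skips that bookkeeping entirely.

```java
// Toy retain counting to mimic Objective-C ownership (hypothetical names).
class Ref {
    int retainCount = 1;                 // alloc/init gives the creator ownership
    void retain()  { retainCount++; }
    void release() { retainCount--; }    // at zero, the object would be freed
}

class Holder {
    Ref field;                           // the "ivar"

    // Mimics "self.field = x" with a retain property: the synthesized
    // setter retains the new value and releases the old one.
    void setField(Ref x) {
        if (x != null) x.retain();
        if (field != null) field.release();
        field = x;
    }
}

public class IvarVsProperty {
    public static void main(String[] args) {
        Holder h = new Holder();
        Ref a = new Ref();
        h.setField(a);                   // "self.field = a": a retained, count 2
        h.field = new Ref();             // direct ivar write: a is never released
        System.out.println(a.retainCount); // still 2, so a leaks
    }
}
```

In real Objective-C the leak is the same shape: the object assigned through the bare ivar never gets its balancing release (or, going the other way, never gets retained and is over-released later).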
If your property is defined as retain in your header, you are claiming ownership for it and so it needs to be released in your dealloc method.
This is covered in the basic Memory Management doc http://developer.apple.com/library/mac/#documentation/Cocoa/...
I love Obj-C dearly. But Obj-C's reference counting scheme is, and always has been, much more cognitive load than a GC. In Java the above would be roughly:

    public MyObject(Number aNumber) {
        someField = new SomeObject();
        someOtherField = aNumber;
    }

...and that's it. No dealloc, no forgetting to release, no forgetting to retain objects you hold onto, no trying to figure out how to handle cyclic references (or indeed whether certain references are cyclic at all). Just set it and forget it.

I've written memory managers used in console games for the N64 (4MB), the PS2 (32MB) and a complete memory tracking system for Unreal Engine 3 on the Xbox 360 (OMG, 512MB).
I use MonoTouch for iOS development. Why? Because I can build complex object graphs without having to maintain, in my head or in external documentation, the acyclic version of the graph. In a GC language I can create graphs that are happily cyclic, and have every reason to be, and yet have them deallocated when the application nulls the last reference.
In a reference counted system that desires cyclic pointers, one or more of those pointers have to be chosen as non-reference-incrementing pointers - and likewise one must remember not to release them either.
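A toy retain-count sketch in Java (hypothetical names again) makes the cycle problem concrete: once two objects retain each other, releasing every external owner still leaves each count at one, so neither is ever freed. That is exactly why one edge of the cycle has to become the non-retaining pointer described above.

```java
// Toy retain counting (hypothetical names) showing why a retain cycle leaks.
class Node {
    int retainCount = 1;        // one owner at creation
    Node child;                 // a retaining ("strong") pointer

    void setChild(Node other) { // retain-style assignment
        other.retain();
        child = other;
    }
    void retain()  { retainCount++; }
    void release() { retainCount--; }   // at zero, the node would be freed
}

public class RetainCycleDemo {
    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.setChild(b);          // a retains b
        b.setChild(a);          // b retains a: a cycle
        a.release();            // the outside world lets go of both...
        b.release();
        // ...but the cycle keeps each count at 1, so neither is ever freed.
        System.out.println(a.retainCount + " " + b.retainCount); // 1 1
    }
}
```

A tracing GC has no such problem, since it asks "is this reachable from a root?" rather than "does anyone still point at this?", and an unreachable cycle fails the first test even though it passes the second.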
Memory management in Objective-C (without GC) is no effort provided one is making applications with simple object interactions, such as the plethora of tree-based hierarchies. Reference counting is great for that.
So it could be that the OP is lazy, or it could be that they have experience with problems that you do not. If you believe that the OP actually requires a lesson on the basic rules of reference-counted memory management (since that's what you provided), then I suggest that you are underestimating the experience of your detractors, which in turn leads me to believe that you are overestimating yours.
This is very well put. Much more insightful than my glib comments below. Thanks dude!
This isn't rocket science. The linked document takes all of 10 minutes to read and understand. If that is not enough, there are three years worth of WWDC sessions on memory management available on iTunes U (this year's are even free!). It's something as basic as learning a new language's if/else structure.
For reference, I'm a five year Java dev who has now been doing iOS dev for two years, and had no problem switching from GC to manual memory management.
I wasn't talking about the Nexus One, I was talking about the iPad vs the Galaxy Tab. And I didn't say that GC was what was slowing down Angry Birds, I was talking about subjective relative performance between Android and iOS.
The point of my criticism is that memory management in objective-c doesn't take any extra time beyond overloading dealloc and writing a few more retains and releases for 90% of all cases. It's just plain lazy, imo. Your comparison to writing raw assembler is nonsense and ignorant.
memory management in objective-c doesn't take any extra time beyond overloading dealloc and writing a few more retains and releases for 90% of all cases
It's the other 10% where the memory leaks that bring down your app come from.
Angry Birds on the Galaxy Tab runs smooth as hell. It runs better than Angry Birds on my iPod Touch... have you actually tried it on the Galaxy Tab recently?
I have to say that developing on Android after having worked on iPhone is a bit like waking up from a vivid nightmare
I wrote most of the original justin.tv iPhone broadcaster app, and the above is very very true for me. Never again, hopefully.
One point that the article doesn't mention is that the iPhone Simulator is exactly that, a simulator: it simulates the iPhone environment.
Simulators have both good and not so good points. On the one hand they are pretty fast, since they use "host" code and "host" APIs. On the other hand, since they use "host" API, you can't rely 100% on them.
For example, the iPhone Simulator simulates the iOS OpenGL ES API using the Mac OpenGL API. While developing cocos2d for iPhone I found many differences between the Simulator and the device. But in spite of that, I still suggest developing mostly everything on the Simulator, and trying the app on the device every now and then to test both the performance and "reality".
I totally agree with his arguments. Learning Android development has been a breeze with the great documentation and ease of deployment for developers. You just have to put up with all the other non-technical aspects of things (fragmentation, uglier UI, etc.).
I can't be the only one that hates Android's documentation. It's nothing more than a list of methods and properties. Thanks for nothing, Google, I can use Eclipse's code completion for that. The Android docs have never helped me once. I've always had to rely on web searches when seeking help.
The iOS docs, on the other hand, are rich and full of example code, example usage and programming guides. There's a lot of hand holding, which is great. Android's SDK is definitely simpler and makes more sense than the iOS SDK "out of the box", but I feel like Apple provides enough documentation on important classes like UITableView and UINavigationController.
However, iOS is strongly MVC, so I can understand how a newbie can feel a little lost with all the ViewControllers and various project templates. (Which one do I use? How do I use Navigation Controllers without starting a Navigation Controller project template? Etc...) A "Hello world" example in iOS creates a lot more files than an Android one.
I also did an app on both OSs and this article matches my experience exactly.
The provisioning in iOS is truly awful. However, in the XCode4 beta this has been greatly improved. You just click on a couple buttons within XCode itself, and everything gets automagically set up.
I've heard that it still causes problems if you're trying to clear out old profiles, but Apple seems to have been trying to provide a good fix to the worst part of iOS development.
I don't think this author can be all that fair. If he had trouble wrapping his head around manual memory management, how far could he have gone in iPhone development?
Sounds like Google are trying the "Developers, Developers, Developers" strategy from Microsoft.
I have to say that developing on Android after having worked on iPhone is a bit like waking up from a vivid nightmare
Can I plug MonoTouch here then? I just ported a native app to MT. It is approx two seconds slower to load - which I do think is a big deal - but the productivity benefits ( = new features) vastly outweighs that drawback.
Stopped reading after the blatant display of apostrophe usage misunderstanding and unnecessary insertion of recent popular culture reference in the article. That is to say, after the first 3 words.
I was tempted to, as well. But it’s at least a marginally interesting piece, summarizing ups and downs of developing for both platforms. (And I think he used the word “inception” literally.)
The inception link was just a bit of fun :)
Thanks for your comments, and the typo in that first sentence was pretty bad. Sorry about that. I've ceded the grammatical highground for the foreseeable future with that gaffe.