When it comes to wrangling data at scale, R, Python, Scala, and Java have you covered -- mostly
You have a big data project. You understand the problem domain, you know what infrastructure to use, and maybe you've even decided on the framework you will use to process all that data, but one decision looms large: What language should I choose? (Or perhaps more pointedly: What language should I force all my developers and data scientists to suffer through?) It's a question that can be put off for only so long.
Sure, there's nothing stopping you from doing big data work with, say, XSLT transformations (a good April Fools' suggestion for tomorrow, simply to see the looks on everybody's faces). But in general, there are three languages of choice for big data these days -- R, Python, and Scala -- plus the perennial stalwart enterprise tortoise of Java. What language should you choose, and why... or when?
Here's a rundown of each to help guide your decision.
R
R is often called "a language for statisticians built by statisticians." If you need an esoteric statistical model for your calculations, you'll likely find it on CRAN -- it's not called the Comprehensive R Archive Network for nothing, you know. For analysis and plotting, you can't beat ggplot2. And if you need to harness more power than your machine can offer, you can use the SparkR bindings to run Spark on R.
However, if you are not a data scientist and haven't used Matlab, SAS, or Octave before, it can take a bit of adjustment to become productive in R. While it's great for data analysis, it's less suited to general-purpose programming. You might construct a model in R, but you would consider translating it into Scala or Python for production, and you'd be unlikely to write a clustering control system in the language (good luck debugging it if you do).
Python
If your data scientists don't do R, they'll likely know Python inside and out. Python has been very popular in academia for more than a decade, especially in areas like natural language processing (NLP). As a result, if you have a project that requires NLP work, you'll face an embarrassment of choices, including the classic NLTK, topic modeling with gensim, or the blazing-fast and accurate spaCy. Similarly, Python punches well above its weight when it comes to neural networks, with Theano and TensorFlow; then there's scikit-learn for machine learning, as well as NumPy and pandas for data analysis.
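None of those libraries are shown here, but the preprocessing step they all build on -- turning raw text into token counts -- can be sketched in a few lines of plain Python (a toy illustration, not how NLTK, gensim, or spaCy actually implement it):

```python
import re
from collections import Counter

def bag_of_words(text):
    """Tokenize text and count term frequencies -- the bare-bones
    first step of most NLP pipelines."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

counts = bag_of_words("Big data is big, and big data is everywhere.")
print(counts.most_common(2))  # [('big', 3), ('data', 2)]
```

The real libraries replace each piece of this with something far smarter (proper tokenizers, stemming, stop-word handling), but the shape of the computation is the same.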
There's Jupyter/IPython, too -- the Web-based notebook server that allows you to mix code, plots, and, well, almost anything in a shareable logbook format. This has long been one of Python's killer features, although these days the concept has proved so useful that it has spread to almost all languages that have a read-evaluate-print loop (REPL), including both Scala and R.
Python tends to be supported in big data processing frameworks, but at the same time, it tends not to be a first-class citizen. For example, new features in Spark will almost always appear first in the Scala/Java bindings, and it may take a few minor versions for those updates to reach PySpark (this is especially true on the Spark Streaming/MLlib side of development).
Unlike R, Python is a traditional object-oriented language, so most developers will be fairly comfortable working with it, whereas first exposure to R or Scala can be quite intimidating. A slight issue is Python's insistence on correct whitespace in your code. This splits people into a camp that says "this is great for enforcing readability" and those of us who believe that in 2016 we shouldn't need to fight an interpreter to get a program running because a line has one character out of place (you might guess where I fall on this issue).
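To make the whitespace point concrete, here's a minimal sketch: the same two statements compile or fail depending on one line's indentation.

```python
# A block whose indentation is consistent compiles fine...
good = "if True:\n    x = 1\n    y = 2\n"
compile(good, "<good>", "exec")  # no error

# ...but indenting one line by a single extra space is a syntax error.
bad = "if True:\n    x = 1\n     y = 2\n"
try:
    compile(bad, "<bad>", "exec")
except IndentationError as err:
    print("rejected:", err.msg)  # rejected: unexpected indent
```

One stray space and the interpreter refuses to run your program -- which is either a readability feature or an annoyance, depending on your camp.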
Scala
Ah, Scala -- of the four languages in this article, Scala is the one that leans back effortlessly against the wall while everybody admires its type system. Running on the JVM, Scala is a mostly successful marriage of the functional and object-oriented paradigms, and it's currently making huge strides in the financial world and at companies that need to operate on very large amounts of data, often in a massively distributed fashion (such as Twitter and LinkedIn). It's also the language that drives both Spark and Kafka.
Because it runs on the JVM, Scala immediately gets access to the Java ecosystem for free, but it also has a wide variety of "native" libraries for handling data at scale (in particular Twitter's Algebird and Summingbird). It also includes a very handy REPL for interactive development and analysis, as in Python and R.
I'm very fond of Scala, if you can't tell, as it includes lots of useful programming features like pattern matching and is considerably less verbose than standard Java. However, there's often more than one way to do something in Scala, and the language advertises this as a feature. And that's good! But given that it has a Turing-complete type system and all sorts of squiggly operators ("/:" for foldLeft and ":\" for foldRight), it is quite easy to open a Scala file and think you're looking at a particularly nasty bit of Perl. A set of good practices and guidelines to follow when writing Scala is needed (Databricks' guidelines are reasonable).
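If those fold operators look opaque, the underlying operation is the same one Python exposes as functools.reduce. A rough sketch of both fold directions in Python (note this is only an analogy: a true foldRight passes its arguments as f(x, acc), whereas here we simply reduce left to right over a reversed list):

```python
from functools import reduce

xs = [1, 2, 3, 4]

# Roughly Scala's (0 /: xs)(f) -- foldLeft: fold from a seed, left to right.
fold_left = reduce(lambda acc, x: acc * 10 + x, xs, 0)
print(fold_left)   # 1234

# Roughly Scala's (xs :\ 0)(f) -- foldRight: fold from the right instead.
fold_right = reduce(lambda acc, x: acc * 10 + x, reversed(xs), 0)
print(fold_right)  # 4321
```

The operation is simple enough; the complaint in the paragraph above is purely about spelling it "/:" in source code.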
The other downside: the Scala compiler is a touch slow, to the extent that it brings back the days of the classic "compiling!" XKCD strip. Still, it has the REPL, big data support, and Web-based notebooks in the form of Jupyter and Zeppelin, so I forgive a lot of its quirks.
Java
Finally, there's always Java -- unloved, forlorn, owned by a company that only seems to care about it when there's money to be made by suing Google, and completely unfashionable. Only drones in the enterprise use Java! Yet Java could be a great fit for your big data project. Consider Hadoop MapReduce -- Java. HDFS? Written in Java. Even Storm, Kafka, and Spark run on the JVM (Storm is written in Clojure; Kafka and Spark in Scala), meaning that Java is a first-class citizen of these projects. Then there are new technologies like Google Cloud Dataflow (now Apache Beam), which until very recently supported Java only.
Java may not be the ninja rock star language of choice. But while the rock stars are straining to sort out the nest of callbacks in their Node.js applications, using Java gives you access to a large ecosystem of profilers, debuggers, monitoring tools, and libraries for enterprise security and interoperability, and much more besides, most of which have been battle-tested over the past two decades. (I'm sorry, everybody; Java turns 21 this year and we are all old.)
The main complaints against Java are its heavy verbosity and the lack of a REPL (present in R, Python, and Scala) for iterative development. I've seen 10 lines of Scala-based Spark code balloon into a 200-line monstrosity in Java, complete with huge type statements that take up most of the screen. However, the new lambda support in Java 8 does a lot to rectify this situation. Java is never going to be as compact as Scala, but Java 8 really does make developing in Java less painful.
As for the REPL? OK, you got me there -- currently, anyhow. Java 9 (out next year) will include JShell for all your REPL needs.
Drumroll, please
Which language should you use for your big data project? I'm afraid I'm going to take the coward's way out and come down firmly on the side of "it depends." If you're doing heavy data analysis with obscure statistical calculations, then you'd be crazy not to favor R. If you're doing NLP or intensive neural network processing across GPUs, then Python is a good bet. And for a hardened, production streaming solution with all the important operational tooling, Java or Scala are definitely great choices.
Of course, it doesn't have to be either/or. For example, with Spark you can train your model and machine learning pipeline with R or Python on data at rest, then serialize that pipeline out to storage, where it can be used by your production Scala Spark Streaming application. While you shouldn't go overboard (your team will quickly suffer language fatigue otherwise), using a heterogeneous set of languages that play to particular strengths can bring dividends to a big data project.
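As a toy illustration of that hand-off pattern -- and emphatically not Spark's actual pipeline-persistence mechanism, which has its own on-disk format -- here is the shape of it in plain Python: "train" something trivial on one side, then serialize it to a language-neutral format that a JVM application could load on the other:

```python
import json
import statistics

# Hypothetical toy "model": a per-feature mean learned from training data.
# The feature names and numbers here are made up for illustration.
training_data = {"clicks": [10, 12, 11, 40], "latency_ms": [5, 7, 6, 30]}
model = {feature: statistics.mean(values)
         for feature, values in training_data.items()}

# Serialize to a language-neutral format; a Scala/Java service can
# deserialize this without knowing anything about Python.
serialized = json.dumps(model, sort_keys=True)
print(serialized)
```

The real thing would persist a full pipeline of transformers rather than a dict of means, but the principle -- train where the data-science tooling is best, serve where the operational tooling is best -- is the same.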


