Apache Spark is a general-purpose, high-performance, horizontally scalable computing framework. Its design lets it handle massive amounts of data in near real time, and it can be used for both batch and stream processing. Many analytic libraries and frameworks use Apache Spark as a backend processing engine for data modeling, machine learning, and other scenarios. It integrates seamlessly with high-level languages such as Python and R. Beyond its analytic capabilities, Spark can be used for any processing that requires high computing power and easily scalable resources.
Apache Spark is a data-analytics and cluster-computing framework originally developed in the AMPLab at UC Berkeley. Spark fits into the Hadoop open-source community, building on top of the Hadoop Distributed File System (HDFS). However, Spark provides an easier-to-use alternative to Hadoop MapReduce and, for certain applications, offers performance up to 10 times faster than previous-generation systems such as MapReduce. Spark is a framework for writing fast, distributed programs.
Spark addresses similar problems to Hadoop MapReduce, but with a fast in-memory approach and a clean, functional-style API. With its ability to integrate with Hadoop, and with built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big data sets.
The Spark ecosystem contains:
- Spark SQL - a Spark module for structured data processing. It provides a programming abstraction called DataFrames and can also act as a distributed SQL query engine. It enables unmodified Hadoop Hive queries to run up to 100x faster on existing deployments and data, and it provides powerful integration with the rest of the Spark ecosystem (e.g., combining SQL query processing with machine learning).
- Spark Streaming - Many applications need to process and analyze not only batch data but also streams of new data in real time. Running on top of Spark, Spark Streaming enables powerful interactive and analytical applications across both streaming and historical data, while inheriting Spark's ease of use and fault-tolerance characteristics. It readily integrates with a wide variety of popular data sources, including HDFS, Flume, Kafka, and Twitter.
- MLlib - Machine learning has quickly emerged as a critical piece in mining Big Data for actionable insights. Built on top of Spark, MLlib is a scalable machine learning library that delivers both high-quality algorithms (e.g., multiple iterations to increase accuracy) and blazing speed (up to 100x faster than MapReduce). The library is usable in Java, Scala, and Python as part of Spark applications, so you can include it in complete workflows.
- GraphX - a graph computation engine built on top of Spark that enables users to interactively build, transform, and reason about graph-structured data at scale. It comes complete with a library of common algorithms.
To make programming faster, Spark provides clean, concise APIs in Scala, Java, Python, R, and SQL. Spark can be used interactively from the Scala and Python shells to rapidly query big datasets. Spark is also the engine behind Shark, a fully Apache Hive-compatible data warehousing system that can run 100x faster than Hive, and it serves as a backend processing engine for analytics platforms such as Apache Mahout and H2O.
Take advantage of Spark's distributed in-memory storage for high-performance processing across a variety of use cases, including batch processing, real-time streaming, and advanced modeling and analytics. With significant performance improvements over MapReduce, Spark is the tool of choice for data scientists and analysts to turn their data into real results.
SPECIAL ACCESS CONDITIONS
Structured and unstructured data processing, machine learning, graph processing, stream processing.
SERVICE CAN BE COMBINED WITH
Apache Hive, Apache Mahout, H2O, Jupyter Notebook, Geoprocessing.