Flintrock is a command-line tool for launching Apache Spark clusters.
Though Flintrock hasn't made a 1.0 release yet, it's fairly stable. Expect some minor but nonetheless backwards-incompatible changes until Flintrock reaches formal stability with a 1.0 release.
Flintrock around the web
Flintrock has been featured in a few talks, guides, and papers around the web.
- Running Spark on a Cluster: The Basics (using Flintrock)
- Spark with Jupyter on AWS
- Building a data science platform for R&D, part 2 – Deploying Spark on AWS using Flintrock
- Creating a Spark cluster on AWS EC2 (in Korean)
Usage

Here's a quick way to launch a cluster on EC2, assuming you already have an AWS account set up:

flintrock launch test-cluster \
    --num-slaves 1 \
    --spark-version 2.4.0 \
    --ec2-key-name key_name \
    --ec2-identity-file /path/to/key.pem \
    --ec2-ami ami-0b8d0d6ac70e5750c \
    --ec2-user ec2-user
If you persist these options to a file, you'll be able to do the same thing much more concisely:
flintrock configure    # Save your preferences via the opened editor, then...
flintrock launch test-cluster
Once you're done using a cluster, don't forget to destroy it with:
flintrock destroy test-cluster
Other things you can do with Flintrock include:
flintrock login test-cluster
flintrock describe test-cluster
flintrock add-slaves test-cluster --num-slaves 2
flintrock remove-slaves test-cluster --num-slaves 1
flintrock run-command test-cluster 'sudo yum install -y package'
flintrock copy-file test-cluster /local/path /remote/path
To see what else Flintrock can do, or to see detailed help for a specific command, try:
flintrock --help
flintrock <subcommand> --help
That's not all. Flintrock has a few more features that you may find interesting.
Accessing data on S3
We recommend you access data on S3 from your Flintrock cluster by following these steps:
- Set up an IAM Role that grants access to S3 as desired. Reference this role when you launch your cluster using the --ec2-instance-profile-name option (or its equivalent in your config file).
- Reference S3 paths in your Spark code using the s3a:// prefix. s3a:// is backwards compatible with s3n:// and replaces both s3n:// and s3://. The Hadoop project recommends using s3a:// since it is actively developed, supports larger files, and offers better performance.
- Make sure Flintrock is configured to use Hadoop/HDFS 2.7+. Earlier versions of Hadoop do not have solid implementations of s3a://. Flintrock's default is Hadoop 2.8.5, so you don't need to do anything here if you're using a vanilla configuration.
- Call Spark with the hadoop-aws package to enable s3a://. For example:

  spark-submit --packages org.apache.hadoop:hadoop-aws:2.7.6 my-app.py
  pyspark --packages org.apache.hadoop:hadoop-aws:2.7.6

  If you have issues using the package, consult the hadoop-aws troubleshooting guide and try adjusting the version. As a rule of thumb, you should match the version of hadoop-aws to the version of Hadoop that Spark was built against (which is typically Hadoop 2.7), even if the version of Hadoop that you're deploying to your Flintrock cluster is different.
With this approach you don't need to copy around your AWS credentials
or pass them into your Spark programs. As long as the assigned IAM role
allows it, Spark will be able to read and write data to S3 simply by
referencing the appropriate path (e.g. s3a://...).
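Putting those steps together, here's a rough sketch of the workflow. The cluster name, IAM role, key, and bucket names below are placeholders for illustration, not values Flintrock provides.

# Launch a cluster whose instances assume a pre-existing IAM role with S3 access.
# "s3-access-role", "key_name", and the key path are hypothetical placeholders.
flintrock launch s3-test-cluster \
    --num-slaves 1 \
    --ec2-instance-profile-name s3-access-role \
    --ec2-key-name key_name \
    --ec2-identity-file /path/to/key.pem

# Log in, then read data through the s3a:// prefix with hadoop-aws on the classpath:
flintrock login s3-test-cluster
pyspark --packages org.apache.hadoop:hadoop-aws:2.7.6
# >>> spark.read.text('s3a://your-bucket/your-file.txt').count()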
Installation

Flintrock requires Python 3.4 or newer, unless you are using one of our standalone packages. Flintrock has been thoroughly tested only on OS X, but it should run on all POSIX systems. A motivated contributor should be able to add Windows support without too much trouble, too.
To get the latest release of Flintrock, simply run pip:
pip3 install flintrock
This will install Flintrock and place it on your path. You should be good to go now!
You'll probably want to get started with the following two commands:
flintrock --help
flintrock configure
Standalone version (Python not required!)
If you don't have a recent enough version of Python, or if you don't have Python installed at all, you can still use Flintrock. We publish standalone packages of Flintrock on GitHub with our releases.
Find the standalone package for your OS under our latest release, unzip it to a location of your choice, and run the flintrock executable inside.
flintrock_version="0.8.0"

curl --location --remote-name "https://github.com/nchammas/flintrock/releases/download/v$flintrock_version/Flintrock-$flintrock_version-standalone-OSX-x86_64.zip"
unzip -q -d flintrock "Flintrock-$flintrock_version-standalone-OSX-x86_64.zip"
cd flintrock/

# You're good to go!
./flintrock --help
You'll probably want to add the location of the Flintrock executable to your
PATH so that you can invoke it from any directory.
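For example, assuming you unzipped Flintrock to ~/flintrock (an arbitrary location used only for illustration), you could add something like this to your shell profile:

# Adjust the path to wherever you unzipped the standalone package.
export PATH="$PATH:$HOME/flintrock"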
Flintrock is also available via the following package managers:
brew install flintrock
These packages are not supported by the core contributors and may be out of date. Please reach out to the relevant communities directly if you have trouble using these distributions to install Flintrock.
If you like living on the edge, install the development version of Flintrock:
pip3 install git+https://github.com/nchammas/flintrock
If you want to play around with Spark, develop a prototype application, run a one-off job, or otherwise just experiment, Flintrock is the fastest way to get you a working Spark cluster.
Flintrock exposes many options of its underlying providers (e.g. EBS-optimized volumes on EC2) which makes it easy to create a cluster with predictable performance for Spark performance testing.
Most people will use Flintrock interactively from the command line, but Flintrock is also designed to be used as part of an automated pipeline. Flintrock's exit codes are carefully chosen; it offers options to disable interactive prompts; and when appropriate it prints output in YAML, which is both human- and machine-friendly.
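As a rough sketch of what that can look like in a script, the following assumes only that flintrock describe exits non-zero when the named cluster doesn't exist, which is consistent with the exit-code behavior described above:

#!/usr/bin/env bash
set -euo pipefail

cluster="nightly-job-cluster"   # hypothetical cluster name

# Launch the cluster only if it doesn't already exist.
if ! flintrock describe "$cluster" > /dev/null 2>&1; then
    flintrock launch "$cluster" --num-slaves 2
fi

# Run the job, then tear the cluster down.
# (destroy prompts for confirmation by default; see `flintrock destroy --help`
# for the option that disables the prompt in unattended runs.)
flintrock run-command "$cluster" 'echo "run your job here"'
flintrock destroy "$cluster"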
There are some things that Flintrock specifically does not support.
Managing permanent infrastructure
Flintrock is not for managing long-lived clusters, or any infrastructure that serves as a permanent part of some environment.
For starters, Flintrock provides no guarantee that clusters launched with one version of Flintrock can be managed by another version of Flintrock, and no considerations are made for any long-term use cases.
If you are looking for ways to manage permanent infrastructure, look at tools like Terraform, Ansible, SaltStack, or Ubuntu Juju. You might also find a service like Databricks useful if you're looking for someone else to host and manage Spark for you. Amazon also offers Spark on EMR.
Launching non-Spark-related services
Flintrock is meant for launching Spark clusters that include closely related services like HDFS, Mesos, and YARN.
Flintrock is not for launching external datasources (e.g. Cassandra), or other services that are not closely integrated with Spark (e.g. Tez).
If you are looking for an easy way to launch other services from the Hadoop ecosystem, look at the Apache Bigtop project.
Launching out-of-date services
Flintrock will always take advantage of new features of Spark and related services to make the process of launching a cluster faster, simpler, and easier to maintain. If that means dropping support for launching older versions of a service, then we will generally make that tradeoff.
Flintrock has a clean command-line interface.
flintrock --help
flintrock describe
flintrock destroy --help
flintrock launch test-cluster --num-slaves 10
Configurable CLI Defaults
Flintrock lets you persist your desired configuration to a YAML file so that you don't have to keep typing out the same options over and over at the command line.
To set up and edit the default config file, run this:

flintrock configure

You can also point Flintrock to a non-default config file by using the --config option.
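For example (the config path is just an illustration; see flintrock --help to confirm how the option is passed):

flintrock --config /path/to/my-config.yaml launch test-cluster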
A config file might look something like this:

provider: ec2

services:
  spark:
    version: 2.4.0

launch:
  num-slaves: 1

providers:
  ec2:
    key-name: key_name
    identity-file: /path/to/.ssh/key.pem
    instance-type: m3.medium
    region: us-east-1
    ami: ami-0b8d0d6ac70e5750c
    user: ec2-user
With a config file like that, you can now launch a cluster with just this:
flintrock launch test-cluster
And if you want, you can even override individual options in your config file at the command line:
flintrock launch test-cluster \
    --num-slaves 10 \
    --ec2-instance-type r3.xlarge
Flintrock is really fast. This is how quickly it can launch fully operational clusters on EC2 compared to spark-ec2.
- Provider: EC2
- Instance type:
- Spark/Hadoop download source: S3
- Launch time: Best of 6 tries
| Cluster Size | Flintrock Launch Time | spark-ec2 Launch Time |
|---|---|---|
| 1 slave | 2m 06s | 8m 44s |
| 50 slaves | 2m 30s | 37m 30s |
| 100 slaves | 2m 42s | 1h 06m 05s |
The spark-ec2 launch times are sourced from SPARK-5189.
Note that AWS performance is highly variable, so you will not get these results consistently. They show the best case scenario for each tool, and not the typical case. For Flintrock, the typical launch time will be a minute or two longer.
Advanced Storage Setup
Flintrock automatically configures any available ephemeral storage on the cluster and makes it available to installed services like HDFS and Spark. This storage is fast and is perfect for use as a temporary store by those services.
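If you're curious what got mounted, one way to check (just a sketch using standard tools, not a dedicated Flintrock feature) is to list the filesystems on the cluster's nodes:

# Look for the ephemeral volumes Flintrock formatted and mounted.
flintrock run-command test-cluster 'df -h'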
Flintrock comes with a set of automated, end-to-end tests. These tests help us develop Flintrock with confidence and guarantee a certain level of quality.
Low-level Provider Options
Flintrock exposes low-level provider options (e.g. instance-initiated shutdown behavior) so you can control the details of how your cluster is set up if you want.
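For instance, a launch command on EC2 might set that shutdown behavior explicitly. The flag name below is a best guess for illustration; confirm it with flintrock launch --help before relying on it:

# Hypothetical illustration of passing a low-level EC2 option at launch time.
flintrock launch test-cluster \
    --num-slaves 1 \
    --ec2-instance-initiated-shutdown-behavior terminate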
No Custom Machine Image Dependencies
Flintrock is built and tested against vanilla Amazon Linux and CentOS. You can easily launch Flintrock clusters using your own custom machine images built from either of those distributions.
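For example, pointing Flintrock at your own image is just a matter of passing the AMI and login user at launch (the AMI ID below is the Amazon Linux example from earlier; substitute your own):

flintrock launch test-cluster \
    --ec2-ami ami-0b8d0d6ac70e5750c \
    --ec2-user ec2-user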
Support for out-of-date versions of Python, EC2 APIs, etc.
Supporting multiple versions of anything is tough. There's more surface area to cover for testing, and over the long term the maintenance burden of supporting something non-current with bug fixes and workarounds really adds up.
There are projects that support stuff across a wide cut of language or API versions. For example, Spark supports Java 7 and 8, and Python 2.6+ and 3+. The people behind these projects are gods. They take on an immense maintenance burden for the benefit and convenience of their users.
We here at project Flintrock are much more modest in our abilities. We are best able to serve the project over the long term when we limit ourselves to supporting a small but widely applicable set of configurations.
Note: The explanation here is provided from the perspective of Flintrock's original author, Nicholas Chammas.
I got started with Spark by using spark-ec2. It's one of the biggest reasons I found Spark so accessible. I didn't need to spend time upfront working through some setup guide before I could work on a "real" problem. Instead, with a simple spark-ec2 command I was able to launch a large, working cluster and get straight to business.
As I became a heavy user of spark-ec2, several limitations stood out and became an increasing pain. They provided me with the motivation for this project.
Among those limitations, the most frustrating ones were:
- Slow launches: spark-ec2 cluster launch times increase linearly with the number of slaves being created. For example, it takes spark-ec2 over an hour to launch a cluster with 100 slaves. (SPARK-4325, SPARK-5189)
- No support for configuration files: spark-ec2 does not support reading options from a config file, so users are always forced to type them in at the command line. (SPARK-925)
- Un-resizable clusters: Adding or removing slaves from an existing spark-ec2 cluster is not possible. (SPARK-2008)
- Custom machine images: spark-ec2 uses custom machine images, making it difficult for users to bring their own image. And since the process of updating those machine images is not automated, they have not been updated in years. (SPARK-3821)
I built Flintrock to address all of these shortcomings, which it does.
Why build Flintrock when we have EMR?
I started work on Flintrock months before EMR added support for Spark. It's likely that, had I considered building Flintrock a year later than I did, I would have decided against it.
Now that Flintrock exists, many users appreciate the lower cost of running Flintrock clusters as compared to EMR, as well as Flintrock's simpler interface. And for my part, I enjoy working on Flintrock in my free time.
Why didn't you build Flintrock on top of an orchestration tool?
People have asked me whether I considered building Flintrock on top of Ansible, Terraform, Docker, or something else. I looked into some of these things back when Flintrock was just an idea in my head and decided against using any of them for two basic reasons:
- Fun: I didn't have any experience with these tools, and it looked both simple enough and more fun to build something "from scratch".
- Focus: I wanted a single-purpose tool with a very limited focus, not a module or set of scripts that were part of a sprawling framework that did a lot of different things.
These are not necessarily the right reasons to build "from scratch", but they were my reasons. If you are already comfortable with any of the popular orchestration tools out there, you may find it more attractive to use them rather than add a new standalone tool to your toolchain.