Data Engineer – On-demand Transportation Company at Blockgram

San Francisco, California
Posted 921 days ago

Blockgram’s client is hiring 5+ Data Engineers to join their growing local team in San Francisco, California. These are all 12-month minimum contract roles with the potential for long-term growth at one of the fastest-growing on-demand transportation companies in the world. We are looking for tenacious, passionate software developers with a specific focus on data engineering to join their data engineering and business analytics team.

These Data Engineers should have exceptional SQL experience and be proficient with Python. The project supports massive growth in both data and business. The focus is on migrating existing extract, transform, and load (ETL) and ELT processes from PL/SQL to the client’s standard framework of Python and ANSI-compliant SQL. The current environment consists of Looker, Tableau, Hadoop, Airflow, Python, SQL, Qubole, and more. We are actively looking for candidates with hands-on working experience with Python, SQL, and Oracle. It is also important to have experience with data warehousing for setting up reporting and analytical platforms with optimum performance, and to be well versed in Unix, Hadoop, and Hive, as well as BI/reporting tools.

In this role, you’ll be responsible for bringing data into the platform, transforming it into a well-defined, consistent model, moving it to the best data stores to support API and analytics use cases, and making it easy for applications and consumers to access the data. If you’re eager to work with the latest emerging data-driven technology and want the opportunity to use powerful business intelligence (BI) and data visualization tools, this role is definitely for you!
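To give a feel for the kind of work described above, here is a minimal, hypothetical sketch of a PL/SQL-style aggregation step re-platformed as Python plus ANSI-compliant SQL. All table and column names (`rides`, `daily_ride_totals`, `fare`) are illustrative assumptions, not the client’s actual schema, and SQLite stands in for the real warehouse:

```python
import sqlite3

def load_daily_ride_totals(conn):
    """Extract raw rides, transform to daily totals, load a reporting table.

    Hypothetical ETL step: the aggregation logic lives in portable ANSI SQL,
    while Python handles orchestration (connections, transactions).
    """
    conn.execute("""
        CREATE TABLE IF NOT EXISTS daily_ride_totals (
            ride_date   TEXT PRIMARY KEY,
            total_rides INTEGER,
            total_fare  REAL
        )
    """)
    conn.execute("""
        INSERT INTO daily_ride_totals (ride_date, total_rides, total_fare)
        SELECT ride_date, COUNT(*), SUM(fare)
        FROM rides
        GROUP BY ride_date
    """)
    conn.commit()

# Demo with an in-memory database and made-up sample rows
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rides (ride_date TEXT, fare REAL)")
conn.executemany(
    "INSERT INTO rides VALUES (?, ?)",
    [("2020-01-01", 12.5), ("2020-01-01", 7.0), ("2020-01-02", 9.25)],
)
load_daily_ride_totals(conn)
rows = conn.execute(
    "SELECT * FROM daily_ride_totals ORDER BY ride_date"
).fetchall()
# rows -> [('2020-01-01', 2, 19.5), ('2020-01-02', 1, 9.25)]
```

In production this step would typically be scheduled as an Airflow task against the warehouse rather than run ad hoc; the point is that the transformation itself stays in standard SQL so it remains portable across engines.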


- BS in Computer Science, Information Systems or related discipline
- 2+ years of experience in data engineering and building large-scale data platforms, experience with Oracle
- 2+ years of experience in SQL to discover, aggregate and extract data
- 2+ years of experience with Hadoop, Pig, Hive, Spark, Storm, and other big data technologies
- 2+ years of experience with data visualization and BI tools, such as Tableau, Kibana, and Looker
- Experience with databases such as MS SQL Server and MySQL
- Experience coding in Python; Node.js or other object-oriented languages is a plus
- Solid Linux and Windows administration skills, and understanding of system performance
- Strong interpersonal and communication skills, flexibility, commitment to the team, and a positive attitude
- Experience with data in the SaaS/subscription space is a plus
- Experience with Apache Beam is a plus
- Knowledge of Cassandra or other distributed data stores (Redis, MongoDB, MemCache, etc.) is a plus
- Experience building CI/CD and server/deployment automation solutions is a plus

This is a 12-month (minimum) contractor role, and the chosen candidates will need to work from the headquarters in downtown San Francisco, California. Please note that we have a very generous referral program in place if you know any qualified candidates who would be good fits!


Apache Spark, BI/MI, Data Quality, Data Visualization, Hadoop, Hive, Pig, Python


