DATA ENGINEER, DATABASE ENGINEERING - Full-time, Remote (Southern California a plus) at Space and Time



About the Job

DATA ENGINEER, DATABASE ENGINEERING
Full-time, Remote (Southern California a plus)

At Space and Time, we are solving Web3’s toughest data analytics challenges at planetary scale with decentralized, peer-to-peer technology. Apps built on top of Space and Time become blockchain-interoperable, crunching SQL + machine learning for Gaming/DeFi data as well as for any decentralized application that needs verifiable tamperproofing, blockchain-grade security, or enterprise scale. We turn any major blockchain into a next-gen database by connecting off-chain storage with on-chain analytic insights. Our team is growing fast, backed by some of the top blockchain orgs and VCs.

A career at Space and Time is lucrative, fast-paced, and very creative. We value you (and all your ideas) like family, and we bring an endless supply of perks: flexible workweeks and flexible vacation, add-on bonuses for hard work, exciting events/conferences/parties, and a headquarters on the beach near LA (though we don’t mind you working remotely). Most importantly, we provide analytics technology to the largest blockchain applications, DAOs, DeFi/DEXs, GameFi, NFT platforms, enterprises, etc. We are committed to growing a diverse and welcoming team in a safe space to be yourself and learn from the most innovative minds in blockchain and data warehousing. Help us invent the first decentralized supercomputer!

As a Data Engineer on our Data Platform Engineering team, you will join skilled Scala/Spark engineers and core database developers responsible for developing hosted cloud analytics infrastructure (Apache Spark-based), distributed SQL processing frameworks, proprietary data science platforms, and core database optimizations. This team builds the automated, intelligent, and highly performant query planner and execution engines, RPC calls between data warehouse clusters, shared secondary cold storage, and more. This includes building new SQL features and customer-facing functionality, developing novel query optimization techniques for industry-leading performance, and building a database system that is highly parallel, efficient, and fault-tolerant. This is a vital role reporting to executive leadership and senior engineering leadership.

Responsibilities:
Writing Scala code with tools like Apache Spark + Apache Arrow + Apache Kafka to build a hosted, multi-cluster data warehouse for Web3 (see the illustrative sketch after this list)
Developing database optimizers, query planners, query and data routing mechanisms, cluster-to-cluster communication, and workload management techniques
Scaling up from proof of concept to “cluster scale” (and eventually hundreds of clusters with hundreds of terabytes each), in terms of both infrastructure/architecture and problem structure
Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases to facilitate metadata capture and management
Managing a team of software engineers writing new code to build a bigger, better, faster, more optimized HTAP database (using Apache Spark, Apache Arrow, Kafka, and a wealth of other open source data tools)
Interacting with the exec team and senior engineering leadership to define and prioritize work and ensure smooth deployments alongside other operational components
Staying highly engaged with industry trends in the analytics domain from a data acquisition, processing, engineering, and management perspective
Understanding data and analytics use cases across Web3 / blockchains
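
To give a concrete, purely illustrative flavor of the Scala/Spark work described above, here is a minimal sketch of a Spark Structured Streaming job that reads events from a Kafka topic and runs a toy aggregation. The object name, broker address, topic name, and schema are hypothetical placeholders rather than anything from Space and Time’s actual codebase, and the sketch assumes the spark-sql-kafka connector is on the classpath.

import org.apache.spark.sql.SparkSession

object WarehouseIngestSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical illustration only: names, endpoints, and schema are placeholders.
    val spark = SparkSession.builder()
      .appName("warehouse-ingest-sketch")
      .getOrCreate()
    import spark.implicits._

    // Stream raw events from a Kafka topic (broker and topic names are made up).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "chain-events")
      .load()
      .selectExpr("CAST(key AS STRING) AS chain", "CAST(value AS STRING) AS payload")

    // A toy aggregation standing in for the analytic queries a warehouse would serve.
    val counts = events.groupBy($"chain").count()

    // Emit incremental results to the console; a real deployment would write to
    // warehouse or cold storage instead.
    counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}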

Skills & Qualifications
Bachelor’s degree in computer science or a related technical field; Master’s or PhD a plus.
6+ years of experience engineering software and data platforms / enterprise-scale data warehouses, preferably with knowledge of the open-source Apache stack (especially Apache Spark, Apache Arrow, Kafka, and others)
3+ years of experience with Scala and Apache Spark (or Kafka)
A track record of recruiting and leading technical teams in a demanding talent market
Rock-solid engineering fundamentals; experience with query planning, query optimization, and distributed data warehouse systems is preferred but not required
Nice to have: knowledge of blockchain indexing, Web3 compute paradigms, proofs, and consensus mechanisms
Experience with rapid development cycles in a web-based environment
Strong scripting and test automation knowledge
Nice to have: passion for Web3, blockchain, and decentralization, plus a baseline understanding of how data/analytics plays into them

What we offer
Very competitive salaries
Medical, dental and vision insurance, disability/life insurance
401(k) Plan
Aggressive bonus structure and/or Space and Time token allocations (similar to stock options)
Very flexible PTO and paid holidays, and flexible workweek
Very flexible remote work options
A massive list of perks, including discretionary add-on bonuses for hard work, exciting events/conferences/parties, a headquarters on the beach near LA (though we don’t mind you working remotely), and likely a monthly flight out to meet in person
Space and Time is an equal opportunity employer (EOE) committed to building a diverse team


Our Commitment to Diversity and Inclusion:

At Space and Time, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Space and Time are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.


Skills

Apache Spark, Apache Arrow, Apache Kafka, Scala

Compensation

200,000 + Equity


Applications for this job are currently closed.
