This listing is closed and is no longer accepting applications.

Job Description

Join our mission-driven Data Engineering team dedicated to building a hyper-scale data lake aimed at identifying bad actors and preventing breaches. Our team designs and operates systems that centralize all data collected by our platform, making it easy for internal and external customers to transform and access data for analytics, machine learning, and threat hunting. As a Senior Engineer, you will work across a diverse range of systems, from foundational data processing and storage to scalable pipelines and tools that make data accessible to other teams and systems.

Key Responsibilities:
- Develop highly fault-tolerant Java code in Apache Spark to create platform products that let customers query event pipelines and gain insight into active threat trends and analytics (a minimal illustrative sketch follows this description).
- Design, develop, and maintain ultra-high-scale data platforms that process petabytes of data.
- Participate in technical reviews of our products and contribute to developing new features and improving stability.
- Continually improve the efficiency and reduce the latency of our high-performance services to delight our customers.
- Research and implement new methods for both internal stakeholders and customers to query their data efficiently and extract results in their desired formats.
- Take ownership of our new graph database and play a significant role in its development.

Qualifications:
- 10+ years of combined experience in backend and data platform engineering roles.
- 5+ years of experience building data platform products or features with tools such as Apache Spark, Flink, or Iceberg, or comparable tools in GCP.
- 5+ years of experience programming in Java, Scala, or Kotlin.
- Proven experience in end-to-end feature/product design, especially with loosely defined problem statements or specifications.
- Expertise in algorithms, distributed systems design, and the software development lifecycle.
- Experience building large-scale data/event pipelines.
- Proficiency in designing solutions with SQL and NoSQL databases, including Postgres, MySQL, Cassandra, and DynamoDB.
- Strong test-driven development discipline.
- Basic proficiency with Linux administration tools.
- Proven ability to work effectively with remote teams.

Preferred Experience:
- Working with Pinot or other time-series/OLAP-style databases.
- Familiarity with Iceberg, Kubernetes, Jenkins, Parquet, and Protocol Buffers/gRPC.

Perks of the Role:
- Remote work flexibility.
- Opportunity to work on impactful projects with a mission-driven team.
- Collaborative and inclusive work environment.
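For a concrete sense of the Spark-based querying described in the responsibilities above, here is a minimal sketch of a batch job in Java that aggregates event data into hourly threat-trend counts. It is illustrative only and assumes details not in the posting: the ThreatTrendQuery class name, the input and output paths, and the event_time, severity, and threat_type columns are all hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.count;
import static org.apache.spark.sql.functions.window;

// Illustrative sketch: read security events and aggregate threat counts per hour.
// Paths, column names, and schema are assumptions, not taken from the posting.
public class ThreatTrendQuery {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("threat-trend-query")
                .getOrCreate();

        // Load raw events from the data lake (Parquet used here for simplicity).
        Dataset<Row> events = spark.read().parquet("s3://example-bucket/events/");

        // Keep higher-severity events and count them per threat type in hourly windows.
        Dataset<Row> trends = events
                .filter(col("severity").geq(3))
                .groupBy(window(col("event_time"), "1 hour"), col("threat_type"))
                .agg(count("*").alias("event_count"))
                .orderBy(col("window"));

        // Write results back for downstream analytics and dashboards.
        trends.write().mode("overwrite").parquet("s3://example-bucket/threat-trends/");

        spark.stop();
    }
}
```

A production pipeline would more likely read from governed lake tables (for example Iceberg, which the posting lists as preferred experience) rather than raw Parquet paths, but the aggregation pattern is the same.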

Employment Type: Full-Time

Salary: $100,000.00 - $216,000.00 Per Year

Education Level: No formal educational credential

Get It Recruit - Information Technology connects job seekers with employers and opportunities that match their qualifications. Job seekers can complete the full application process on the employer's website. To learn more about the position, click our 'Apply Now' button and begin the application process. This job is shared on behalf of the employer, and any questions about the position, salary, application process, or other details about the job should be directed to them.

Similar Jobs

Resource Informatics Group Inc
Dallas County, TX

ICS Global Soft INC
Boston, MA

A-Line Staffing Solutions
New York, NY

Lighthouse Professional Services
Rocky Hill, CT

Jobot
Los Angeles, CA
Posted on Jun 30
Sr. Software Engineer, Data Query Platform - Remote | WFH
New York, NY | Remote (Friendly)
$100,000 - $216,000 Per Year