DataOps Engineer I
Restoration of America
The DataOps Engineer I at Restoration of America is a junior-to-mid-level role dedicated to streamlining the data lifecycle and ensuring the seamless delivery of high-quality data. Working closely with senior leadership, you will automate workflows, monitor pipeline health, and maintain robust database environments. This role is ideal for a mission-aligned engineer who is passionate about process optimization, reliability, and technical growth within a modern stack of PostgreSQL, dbt, and Airflow.
Key Responsibilities:
Assist with the performance monitoring and routine maintenance of PostgreSQL databases to ensure high availability and security.
Automate and monitor the deployment of dbt models and Airflow DAGs (see the sketch after this list).
Manage tasks through an SDLC project management application and document engineering standards to ensure transparent and reliable operations.
Utilize Git for version control and participate in technical planning sessions and production support rotations.
Monitor data quality standards and assist in implementing security and access controls across the data platform.
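To give a flavor of the orchestration work above, here is a minimal sketch of an Airflow DAG that runs a daily dbt build. The DAG id, schedule, and project paths are hypothetical placeholders, and the schedule argument assumes Airflow 2.4 or later.

# Minimal sketch: a daily Airflow DAG that builds and tests dbt models.
# DAG id, schedule, and project paths are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",          # hypothetical DAG id
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    start_date=datetime(2024, 1, 1),
    catchup=False,
    tags=["dbt", "dataops"],
) as dag:
    dbt_build = BashOperator(
        task_id="dbt_build",
        # --project-dir and --profiles-dir values are placeholders
        bash_command="dbt build --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )

    dbt_docs = BashOperator(
        task_id="dbt_docs_generate",
        bash_command="dbt docs generate --project-dir /opt/dbt/analytics --profiles-dir /opt/dbt",
    )

    dbt_build >> dbt_docs  # generate docs only after a successful build

In practice the commands, connection profiles, and scheduling cadence would follow the team's existing conventions.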
Required Qualifications:
1–3 years of hands-on experience in a data-centric role (Data Engineering, Database Administration, or Backend Engineering).
Solid understanding of PostgreSQL fundamentals, including writing complex queries, schema design, and basic performance tuning (indexing).
Exposure to modern data transformation tools (dbt) and orchestration (Airflow, Dagster, or Prefect).
Strong ability to write clean, performant SQL (see the short sketch after this list).
Familiarity with automated testing frameworks (e.g., dbt tests, Great Expectations).
Proficiency with Git and a solid understanding of branching and pull request workflows.
Deep commitment to the organization’s mission and values, with a desire to apply technical skills toward these goals.
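As a concrete illustration of the SQL and data-quality expectations above, the following is a minimal sketch of a hand-rolled PostgreSQL check run from Python. The table, columns, and connection string are hypothetical, and production checks would more likely be expressed as dbt tests or Great Expectations suites.

# Minimal sketch of a SQL-based data quality check.
# Table, columns, and DSN are hypothetical; in practice checks like this
# would typically live in dbt tests or a Great Expectations suite.
import psycopg2

CHECK_SQL = """
    SELECT
        COUNT(*) FILTER (WHERE donor_id IS NULL) AS null_donor_ids,
        MAX(created_at)                          AS latest_record
    FROM raw_donations;
"""

def run_quality_check(dsn: str) -> None:
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(CHECK_SQL)
        null_donor_ids, latest_record = cur.fetchone()
        if null_donor_ids:
            raise ValueError(f"{null_donor_ids} rows are missing donor_id")
        print(f"Quality check passed; latest record at {latest_record}")

if __name__ == "__main__":
    run_quality_check("dbname=analytics user=dataops host=localhost")  # illustrative DSN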
Preferred Qualifications:
Bachelor's degree in Computer Science, Data Science, or a related field (or equivalent professional experience).
Foundational experience with dbt (Core or Cloud) and familiarity with cloud data warehouses.
Basic proficiency in Python for automating data processing tasks or infrastructure tooling (see the brief sketch after this list).
Exposure to cloud platforms (AWS, GCP, or Azure) and an interest in learning Infrastructure-as-Code tools like Terraform.
Familiarity with monitoring and observability tools such as Datadog, Grafana, or Prometheus.
Knowledge of or interest in distributed data processing and real-time streaming platforms like Kafka or Kinesis.
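As an example of the lightweight Python automation referenced in the preferred qualifications, here is a minimal sketch that loads a CSV export into a PostgreSQL staging table. The file path, table name, and connection string are illustrative only.

# Minimal sketch: load a CSV export into a PostgreSQL staging table.
# The file path, table name, and connection string are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

def load_csv_to_postgres(csv_path: str, table_name: str, conn_str: str) -> int:
    """Append the rows of a CSV file to a PostgreSQL table and return the row count."""
    df = pd.read_csv(csv_path)
    engine = create_engine(conn_str)
    df.to_sql(table_name, engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    rows = load_csv_to_postgres(
        "exports/daily_extract.csv",                                    # illustrative path
        "raw_daily_extract",                                            # illustrative staging table
        "postgresql+psycopg2://user:password@localhost:5432/analytics"  # illustrative DSN
    )
    print(f"Loaded {rows} rows")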