Job Details
Data Engineer, Privacy
Privacy is one of the defining social issues of our time, and we have a responsibility to the people and businesses across the world who trust us with their data. Our organization is responsible for providing analytics and data insights for the design, implementation, monitoring, and maintenance of the company’s Privacy Programs. We work with our partners to ensure people’s privacy is at the center of our products and services, and that we’re complying with our regulatory obligations, all while maintaining Facebook’s core culture. We are looking for candidates who share our passion for tackling privacy complexities head-on to help design, build, scale, and continuously improve industry-leading privacy programs at Meta.

As a Privacy Data Engineer, you will help build the analytics infrastructure and data pipelines that give the privacy programs precision, operational scale, and risk mitigation. This is a partnership-heavy role: given the consulting nature of our team, you will work with engineers to contribute to a variety of projects and technologies, depending on our partners’ needs. You will be responsible for scalable, reliable, high-quality data products in a rapidly growing and constantly evolving space, constantly innovating and solving problems with data to help our partners, the privacy programs, deliver the processes, tools, products, infrastructure, and decisions that help us honor people’s privacy in everything we do.

The ideal candidate will have a passion for adding social value and creating impact from the ground up in a fast-paced, highly collaborative, team-oriented environment, along with a proven track record of thought leadership and impact in developing similar analytics- and metrics-based programs. This position is part of the Privacy Programs team.
Required Skills
Data Engineer, Privacy Responsibilities:
- Partner with leadership, software engineers, product managers, program managers and data scientists to understand data needs.
- Own a specific domain or class of data engineering challenges and lead by example.
- Influence short- and long-term strategy with cross-functional teams to drive impact.
- Design, build, and launch highly efficient and reliable data pipelines to move data across a number of platforms, including the data warehouse, online caches, and real-time systems.
- Build data expertise and own data quality for allocated areas of ownership.
- Architect, build, and launch new data models that provide intuitive analytics.
- Work with teams to establish data sources that serve teams' business needs, identifying and advocating for process improvements and industry best practices.
- Work with a variety of Meta products, applications, and warehouses, transforming raw data into finished products to help drive, investigate, monitor, report, and quantify the state of risk and compliance.
- Generate information and insights from data sets, identifying trends and patterns to enhance process maturity and the prioritization of related business decisions.
Minimum Qualifications
Minimum Qualifications:
- Experience understanding requirements, analyzing data, discovering opportunities, addressing gaps and communicating them to multiple individuals and stakeholders.
- 3+ years of Python development experience.
- 3+ years of experience with schema design and data modeling.
- 3+ years of SQL experience.
- 3+ years of experience with workflow management engines (e.g., Airflow, Luigi, Prefect, Dagster, Digdag, Google Cloud Composer, AWS Step Functions, Azure Data Factory, UC4, Control-M).
- Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience.
- Experience working with cloud or on-prem Big Data/MPP analytics platforms (e.g., Snowflake, AWS Redshift, Google BigQuery, Azure Data Warehouse, Netezza, Teradata, or similar).
- 3+ years of experience engineering data pipelines on large-scale datasets using big data technologies (e.g., Hive, Presto, Spark, Flink).
Preferred Qualifications
Preferred Qualifications:
- Experience with notebook-based data science workflows.
- Experience with Airflow.
- Experience with data quality and validation.
- Experience with designing and implementing real-time pipelines.
- Experience with more than one coding language.
- Experience with SQL performance tuning and end-to-end (E2E) process optimization.
- Experience querying massive datasets using Spark, Presto, Hive, Impala, etc.
- Experience with anomaly/outlier detection.
(Colorado only*) Estimated salary of $170,000/year + bonus + equity + benefits. *Note: Disclosure as required by SB19-085 (8-5-20).
Facebook is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Facebook is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at [Register to View]