Medium Contract

JB060864 - Senior AWS Engineer

  • Start Date:
  • Interview Types:
  • Skills: data engineers respo..
  • Visa Types: Green Card, US Citiz..
 
This role is for someone in Dallas, onsite 3 times per week.
 
Senior AWS Developer, ideally with some architecture experience.
Knowledgeable about the products and able to help guide the client in decisions.
 
As always, TEK FORMERS/CURRENTS get first preference!

Location is Richardson, TX. Must be local.
 

Client: Bank of Oklahoma
Location: Richardson, TX (hybrid onsite, 3-4 days per week)
Title: Data Engineer AWS
Duration: 6 months, possible extension
 
Description
Design and develop data architecture: Create scalable, reliable, and efficient data lakehouse solutions on AWS, leveraging Apache Iceberg as the table format alongside the relevant AWS services.

Build and maintain data pipelines: Design, construct, and automate ETL/ELT processes to ingest data from diverse sources into the AWS ecosystem. 
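
As a rough illustration of what automating one of these pipelines can look like (Airflow comes up in the skills section below), here is a minimal Airflow DAG sketch; the DAG id, schedule, and task logic are placeholders, not the client's actual pipeline:

    # Hypothetical daily ingest-then-transform DAG; names and schedule are illustrative.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def ingest_raw_files(**context):
        # Placeholder: pull files from a source system and land them in S3.
        pass


    def transform_to_lakehouse(**context):
        # Placeholder: trigger the Spark/Glue job that writes into the Iceberg tables.
        pass


    with DAG(
        dag_id="daily_lakehouse_ingest",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        ingest = PythonOperator(task_id="ingest_raw_files", python_callable=ingest_raw_files)
        transform = PythonOperator(task_id="transform_to_lakehouse", python_callable=transform_to_lakehouse)

        ingest >> transform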

Create and manage data APIs: Design, develop, and maintain secure and scalable RESTful and other APIs to facilitate data access for internal teams and applications, typically leveraging AWS services. 
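
A common pattern here is API Gateway in front of a Lambda function. A minimal handler sketch follows; the DynamoDB table and key names are made up for illustration, and a real endpoint might serve from Athena or another layer instead:

    # Sketch of a Lambda handler behind API Gateway that serves data reads.
    # Table name and key are hypothetical placeholders.
    import json

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("customer_summary")  # placeholder table name


    def lambda_handler(event, context):
        customer_id = (event.get("pathParameters") or {}).get("customer_id")
        if not customer_id:
            return {"statusCode": 400, "body": json.dumps({"error": "customer_id is required"})}

        item = table.get_item(Key={"customer_id": customer_id}).get("Item")
        if not item:
            return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

        # default=str handles DynamoDB Decimal values during serialization.
        return {"statusCode": 200, "body": json.dumps(item, default=str)}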

Implement AWS services: Utilize a wide array of AWS tools for data processing, storage, and analytics, such as Amazon S3, Amazon EMR, and AWS Lake Formation, with native Iceberg support. 

Manage Iceberg tables: Build and manage Apache Iceberg tables on Amazon S3 to enable data lakehouse features like ACID transactions, time travel, and schema evolution. 
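
For a sense of what this looks like in practice, here is a hedged PySpark sketch; it assumes a SparkSession already configured with an Iceberg catalog (named "glue_catalog" here) backed by S3, and the database, table, and column names are placeholders:

    # Create, write to, and time-travel an Apache Iceberg table from PySpark.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("iceberg_demo").getOrCreate()

    spark.sql("""
        CREATE TABLE IF NOT EXISTS glue_catalog.analytics.orders (
            order_id    BIGINT,
            customer_id STRING,
            amount      DECIMAL(12, 2),
            order_ts    TIMESTAMP
        )
        USING iceberg
        PARTITIONED BY (days(order_ts))
    """)

    # Writes commit atomically (ACID), so concurrent jobs either succeed or fail cleanly.
    spark.sql("""
        INSERT INTO glue_catalog.analytics.orders
        VALUES (1, 'c-100', 25.00, TIMESTAMP '2024-06-01 10:00:00')
    """)

    # Time travel: list snapshots, then read the table as of an earlier one.
    first_snapshot = spark.sql(
        "SELECT snapshot_id FROM glue_catalog.analytics.orders.snapshots"
    ).collect()[0]["snapshot_id"]
    spark.read.option("snapshot-id", first_snapshot).table("glue_catalog.analytics.orders").show()

    # Schema evolution: adding a column is a metadata-only change.
    spark.sql("ALTER TABLE glue_catalog.analytics.orders ADD COLUMNS (channel STRING)")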

Optimize data performance: Implement partitioning strategies, data compaction, and fine-tuning techniques for Iceberg tables to enhance query performance. 
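
As a sketch of what that maintenance can look like with Iceberg's built-in Spark procedures (same assumed Iceberg-enabled session and placeholder table name as above):

    # Routine Iceberg table maintenance: compaction and snapshot expiration.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("iceberg_maintenance").getOrCreate()

    # Compact small files into larger ones to keep scans efficient.
    spark.sql("""
        CALL glue_catalog.system.rewrite_data_files(
            table => 'analytics.orders',
            options => map('target-file-size-bytes', '134217728')
        )
    """)

    # Expire old snapshots so metadata and storage don't grow without bound.
    spark.sql("""
        CALL glue_catalog.system.expire_snapshots(
            table => 'analytics.orders',
            older_than => TIMESTAMP '2024-01-01 00:00:00'
        )
    """)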

Ensure data quality and integrity: Implement data validation and error-handling processes, leveraging Iceberg's transactional capabilities for consistent data. 

Ensure security and compliance: Implement robust data security measures, access controls, and compliance with data protection regulations, including using AWS Lake Formation with Iceberg and implementing authorization on APIs via IAM or Cognito. 
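
One small example of the Lake Formation side, sketched with boto3 (the role ARN, database, and table names are placeholders, not the client's setup):

    # Grant table-level read access through AWS Lake Formation.
    import boto3

    lakeformation = boto3.client("lakeformation")

    lakeformation.grant_permissions(
        Principal={
            "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analyst-role"
        },
        Resource={
            "Table": {
                "DatabaseName": "analytics",
                "Name": "orders",
            }
        },
        Permissions=["SELECT"],
    )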


Collaborate with stakeholders: Work closely with data scientists, analysts, software engineers, and business teams to understand their data needs and deliver effective solutions. 

Provide technical support: Offer technical expertise and troubleshooting for data-related issues related to pipelines and API endpoints. 


Maintain documentation: Create and maintain technical documentation for data workflows, processes, and API specifications.  
 
Top Skills Details
They are looking for data engineers responsible for ingesting sources into the lakehouse and for standardizing, transforming, and integrating data across the architecture.
They need experienced individuals who are comfortable making recommendations and bringing their own experience to the table, rather than those who just follow requirements in an already defined environment.
Experience with a Greenfield implementation or excitement about such an opportunity is critical.
While they are not fixated on specific tools, there is a strong preference for candidates with AWS experience rather than only Azure or GCP.
Key AWS services and tools mentioned include Airflow, Lambda functions, Glue (though not a strong preference), and Iceberg as a table format. Familiarity with file formats like JSON and Avro is also noted.
The role will involve building new things and foundational work, not primarily migrating existing systems.
 
Worksite Address
333 W. Campbell Road, Richardson, Texas 75080, United States
 
Workplace Type
Hybrid
 
Additional Skills & Qualifications
Programming: Proficiency in programming languages like Python, Java, or Scala. 


SQL: Strong SQL skills for querying, data modeling, and database design. 


AWS Services: Expertise in relevant AWS services such as S3, EMR, Lambda, API Gateway, SageMaker and IAM. 


Apache Iceberg: Hands-on experience building and managing Apache Iceberg tables. 


Big Data: Experience with big data technologies like Apache Spark and Hadoop. 


API Development: Experience creating and deploying RESTful APIs, with knowledge of best practices for performance and security. 


ETL/Workflow: Experience with ETL tools and workflow orchestration tools like Apache Airflow. 

DevOps: Familiarity with DevOps practices, CI/CD pipelines, and infrastructure as code (Terraform) is often desired.
 
 
 
Thanks,