Collaborate directly with customers' data teams on projects involving data interfaces and integration.
Lead complex data extraction tasks to address business inquiries, generate ad-hoc reports, and respond to customer queries.
Execute data validation tasks, standardize and explore datasets, and troubleshoot and resolve data-related issues.
Play a key role in data onboarding workstreams for new customers, pilots, and proof-of-concept projects.
Own all stages of the data flow, from definition through data acquisition, and establish dataset acceptance criteria.
Develop data queries and extractions to support customer success managers.
Design, build, and maintain ETL processes for various data streams, using multiple ingestion methods such as Blob storage, Snowflake, Kafka, SFTP, SQL connectors, and REST APIs.
Work with a modern technology stack, including Java, Python, Spark, Scala, Airflow, Kubernetes, and Kafka.
Key Requirements:
Minimum of 3 years of hands-on experience in data operations/engineering.
Bachelor's degree in Information Systems, Statistics, Mathematics, Engineering, Economics, or a related discipline.
Demonstrated proficiency in developing and maintaining ETL/ELT pipelines, preferably in Python (Dask or equivalent); familiarity with pandas and NumPy is a plus.
Strong SQL skills for querying large and complex datasets.
Extensive experience working with SQL and NoSQL databases (e.g., PostgreSQL), data lakes, and columnar databases to turn data into meaningful insights.
Proven experience in client-facing roles, with project management capabilities and a track record of managing customer expectations and deadlines.
Excellent spoken and written English communication skills.
A results-driven, pragmatic, and innovative approach.