Big Data Engineer
Location: McLean, VA
Duration: Long Term

Must-have skills: PySpark, Kafka, NiFi, Sqoop, AWS, Python, Hive, Hadoop, SQL, Pig, Spark
Secondary skills: NoSQL, Statistical Models, Financial Models, Machine Learning

Key responsibilities:
- Deliver big data engineering solutions with Hadoop, Python, and Spark (PySpark, NiFi), along with a high-level understanding of machine learning
- Develop scalable, reliable data solutions to move data across systems from multiple sources, in real time (NiFi, Kafka) as well as in batch mode (Sqoop)
- Construct data staging layers and fast real-time systems to feed BI applications and machine learning algorithms
- Apply expertise in technologies and tools such as Python, Hadoop, Spark, and AWS, as well as other cutting-edge Big Data tools and applications
- Demonstrate the ability to quickly learn new tools and paradigms to deploy cutting-edge solutions
- Develop both deployment architecture and scripts for automated system deployment in AWS
- Create large-scale deployments using newly researched methodologies
- Work in an Agile environment
- Use strong SQL skills to process large data sets