Are you a detail-oriented self-starter who possesses a high level of technical curiosity? Are you driven to become an expert in the design and implementation of data pipelines? Are you passionate about ensuring optimal software deployment for our customers’ needs? Do you want to be part of an exciting scale-up with massive upside potential? Come and join us at Spectrum Effect!
Spectrum Effect’s mission is to solve the most challenging and costly problems in the wireless industry through innovation and automation. Our team is passionate about creating disruptive technologies, developing solutions with engineering excellence, and delivering substantial value to our customers. Protected by 30 patents and deployed by leading mobile operators across the globe, our Spectrum-NET software solution performs automated ML-driven analysis of radio access networks. Spectrum-NET is a cloud-native, horizontally scalable solution based on a Kubernetes-orchestrated microservices architecture.
Our 50-person team, located in San Pedro Garza García, México, enjoys ownership in our private company through stock options and very competitive salaries. This is an amazing opportunity to join an emerging leader in the ML-driven automation space and make a profound impact on the mobile industry.
As a DataOps Engineer, you would be responsible for developing and maintaining Apache NiFi pipelines and overseeing day-to-day operations of our ETL pipelines. You would audit XML and CSV data and transform Excel, CSV, XML, YAML, and JSON files through scripting. You would also create and define connections to our clients’ endpoints, such as SQL-like databases, data lakes, and other sources.
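To give a flavor of the file transformations this role involves, here is a minimal Python sketch converting a CSV extract into JSON records. The field names and sample values are invented for illustration; real pipelines would handle client-specific schemas.

```python
# Hypothetical example: convert a CSV extract into JSON records.
# Column names ("cell_id", "kpi") and values are invented for illustration.
import csv
import io
import json

def csv_to_json_records(csv_text: str) -> str:
    """Parse CSV text and serialize each row as a JSON object."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

sample = "cell_id,kpi\nA1,0.97\nB2,0.88\n"
print(csv_to_json_records(sample))
```

The same pattern extends to YAML or XML by swapping in the appropriate parser while keeping the row-oriented transformation logic unchanged.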
- Design and create data pipelines using Apache NiFi, Python, and Apache Kafka.
- Integrate different data sources for extract, transform, and load (ETL) workflows (e.g., SQL-like databases, data lakes, XML, CSV).
- Monitor data processing steps via Kibana + Elasticsearch and alert team members to data anomalies.
- Maintain and optimize existing data pipelines to reduce inefficiencies, improve throughput and reliability, and optimize hardware resource usage.
- Automate repeated data management tasks to reduce toil.
- Provide feedback and improvement ideas to the software development data pipeline team to continually improve performance and usability.
- Provide initial troubleshooting of data processing errors by reviewing service logs, hardware alarms, DB health, and resource usage.
- Provide data processing status updates and maintain historical records of system performance.
What you need to have:
- Bachelor’s Degree in Computer Science, Engineering, or a related field.
- Apache NiFi, Apache Kafka, and Python experience.
- Kibana and Elasticsearch or other monitoring tool experience.
- Hardware monitoring experience.
- Linux shell and command line scripting experience.
- AWS Cloud experience.
- Basic Kubernetes experience.
Thinking about advancing your career to the next level? Do you have what it takes to thrive in a fast-moving software organization?
Apply now! Nothing ventured, nothing gained.