Note: Although this is a remote position, we are only seeking candidates in European and African time zones between UTC-1 and UTC+3.
Hotjar is looking for a driven and ambitious Senior Software Engineer specializing in Big Data to support and expand the cloud-based infrastructure used by thousands of sites around the world. The Hotjar infrastructure currently processes more than 10,000 API requests per second, delivers over a billion pieces of static content every week, and hosts databases well into the terabyte range, making this an interesting and challenging opportunity.
As Hotjar continues to grow rapidly, we are seeking an engineer with experience running high-traffic, cloud-based applications who can help Hotjar scale as our traffic multiplies and our storage requirements increase. This is an excellent career opportunity to join a fast-growing remote startup in a key position.
In this position, you will:
Provide expertise in architecting, designing and implementing Big Data solutions.
Choose, deploy and manage tools and technologies to build and support a robust data infrastructure.
Be responsible for identifying bottlenecks and improving the performance of all our data pipelines.
Ensure all necessary monitoring, alerting and backup solutions are in place.
Be one of the primary points of contact within the organization for data pipelines, ETL processes, and complex queries required by the product or for business intelligence purposes.
Research and keep up to date on trends in big data processing and large-scale analytics.
Implement proof-of-concept solutions in the form of prototype applications.
Our compensation ranges are based on market research and are equitable with other roles within Hotjar. The actual compensation offered to a successful candidate will be based on relevant experience and skills. At this time we are only able to provide official employment status to those located in Malta. All other candidates will join our team as full-time consultants and will be responsible for paying any taxes or applicable fees where they reside.
Requirements:
4 or more years of experience working with big data and data management.
Experience using message queues or distributed logs such as Apache Kafka.
Experience using stream processing tools such as Apache Spark/Flink.
Experience using massively parallel processing engines such as Apache Impala.
Experience using real-time analytics databases such as Apache Druid.
Experience with Python, PostgreSQL and Elasticsearch.
Experience in any of the following is considered to be an asset:
Hadoop (HBase), Redis, Git, Docker, Kubernetes.
Must consent to a background check confidentially processed by our third-party provider.