Worked at 6 companies for 6 years 6 months
Internet, IT
Software Engineer
E-Commerce Company
Internet
3 years 5 months
09.2022 - present
Created a web application for processing shoe sales orders (Stack: Java, MySQL, HTML, CSS, jQuery, Spring (Data, Security, JPA, Web, Boot), Hibernate)
Optimized the MySQL database (created indexes and triggers, optimized queries)
Created a recommendation feature for the shoe sales system that builds personalized recommendations from user browsing and order history
Created a computer vision system for recognizing people at pedestrian crossings (Python, cv2)
Created a web application for processing book sales orders (Scala, Akka)
Created a web application for managing clinic patients based on a microservice architecture, with the database deployed in a container (Java, MySQL, HTML, CSS, jQuery, Spring (Data, Security, JPA, Web, Boot), Docker, orchestrator pattern, React, JavaScript)
Created deep learning systems for analyzing company employee data in order to assess employee maturity (Python, pandas, ANOVA, NumPy, seaborn, matplotlib, SciPy, glob)
Conducted research on clustering methods for implementing a recommendation feature based on collaborative filtering in a movie viewing system (Python, pandas, sklearn (DBSCAN, silhouette_score, calinski_harabasz_score, KMeans, davies_bouldin_score, OPTICS, ST_DBSCAN), seaborn, gensim)
Created an application that reads records in various formats from disk, publishes them via Kafka, and parses and writes them to the database (Python, Scala, Java, Spark, Kafka, Spark Streaming, MySQL) (see the sketch after this list)
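Below is a minimal, illustrative sketch of such a Kafka-to-database pipeline, assuming Spark 3.x Structured Streaming with the spark-sql-kafka-0-10 connector and a MySQL JDBC driver on the classpath; the topic name, JSON schema, table and connection details are placeholders, not the actual project code.

    // Hypothetical sketch: read order records from Kafka, parse the JSON payload,
    // and append each micro-batch to a MySQL table via the JDBC writer.
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructType;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.from_json;

    public class OrdersIngestJob {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("orders-ingest")
                    .getOrCreate();

            // Assumed JSON layout of each Kafka record (illustrative only).
            StructType schema = new StructType()
                    .add("orderId", DataTypes.LongType)
                    .add("product", DataTypes.StringType)
                    .add("amount", DataTypes.DoubleType);

            // Read raw records from Kafka and parse the string value column.
            Dataset<Row> orders = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "localhost:9092")
                    .option("subscribe", "orders")
                    .load()
                    .select(from_json(col("value").cast("string"), schema).alias("o"))
                    .select("o.*");

            // Write each micro-batch to MySQL through the JDBC data source.
            StreamingQuery query = orders.writeStream()
                    .foreachBatch((batch, batchId) -> batch.write()
                            .format("jdbc")
                            .option("url", "jdbc:mysql://localhost:3306/shop")
                            .option("dbtable", "orders")
                            .option("user", "app")
                            .option("password", "secret")
                            .mode("append")
                            .save())
                    .start();

            query.awaitTermination();
        }
    }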
Big Data Software Engineer
EPAM
IT
7 months
03.2022 - 09.2022
Responsibilities as a developer:
Created and modified Databricks notebooks;
Created and modified pipelines in Azure Data Factory;
Created an uber JAR to run a job on Databricks;
Used Apache Spark to implement distributed processing of unstructured and semi-structured data, using Java and Scala (see the sketch after this list);
Refactored code.
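As a hedged illustration of the Spark processing mentioned above (not the project code), here is a small Java batch job over semi-structured JSON; the DBFS paths, column name and aggregation are assumptions.

    // Illustrative sketch: load JSON lines with inferred schema, filter out rows
    // without an event type, count events per type, and save the result as Parquet.
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;

    public class SemiStructuredJob {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("semi-structured-demo")
                    .getOrCreate();

            Dataset<Row> events = spark.read().json("dbfs:/data/events/*.json");

            Dataset<Row> perType = events
                    .filter(col("eventType").isNotNull())
                    .groupBy(col("eventType"))
                    .count()
                    .withColumnRenamed("count", "events");

            perType.write().mode("overwrite").parquet("dbfs:/data/out/events_per_type");
            spark.stop();
        }
    }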
Big Data Software Engineer
EPAM
IT
7 months
09.2021 - 03.2022
Responsibilities as a developer:
Used Apache Spark to implement distributed processing of unstructured and semi-structured data, using Scala;
Created a Scala application to investigate the capabilities of Scala.
Junior Software Engineer
EPAM
IT
7 months
03.2021 - 09.2021
Responsibilities as a developer:
Implemented an eventing framework (Kafka clients) using the Java technology stack;
Integrated Confluent Schema Registry into a Spring Boot application to work with schemas;
Implemented handlers for failed messages (see the sketch after this list);
Worked with Confluent Control Center;
Created a Get Started guide for teams;
Set up Kafka locally;
Deployed AWS DynamoDB in Docker;
Ran test scenarios in a Docker container with result output;
Conducted code reviews.
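A minimal, assumed sketch of the Kafka client work with failed-message handling described above: a plain Java consumer that routes records it cannot process to a dead-letter topic. The topic names, serializers and processing step are illustrative placeholders; the real project also involved the Confluent Schema Registry and Avro.

    // Illustrative sketch: consume records, and send any record whose processing
    // fails to a dead-letter topic before committing offsets.
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;

    public class EventConsumer {
        public static void main(String[] args) {
            Properties consumerProps = new Properties();
            consumerProps.put("bootstrap.servers", "localhost:9092");
            consumerProps.put("group.id", "events-app");
            consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            Properties producerProps = new Properties();
            producerProps.put("bootstrap.servers", "localhost:9092");
            producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
                 KafkaProducer<String, String> dlqProducer = new KafkaProducer<>(producerProps)) {
                consumer.subscribe(List.of("events"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        try {
                            process(record.value()); // domain logic would go here
                        } catch (Exception e) {
                            // Failed messages go to a dead-letter topic for later inspection.
                            dlqProducer.send(new ProducerRecord<>("events.dlq", record.key(), record.value()));
                        }
                    }
                    consumer.commitSync();
                }
            }
        }

        private static void process(String payload) {
            if (payload == null || payload.isBlank()) {
                throw new IllegalArgumentException("empty payload");
            }
        }
    }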
Junior Software Engineer
EPAM
IT
6 months
10.2020 - 03.2021
Responsibilities as a developer:
Ran a Java Spark application on the Hortonworks Hadoop sandbox;
Saved and loaded data in HDFS as JSON, Avro, CSV, and Parquet files;
Saved and loaded data from Hive;
Implemented a Spark Streaming application with watermarking (see the sketch after this list);
Implemented an eventing framework (Kafka clients) using the Java technology stack;
Used Apache Spark to implement distributed processing of unstructured and semi-structured data;
Ran a Java Spark application on Amazon EMR and saved the results to an S3 bucket.
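As an illustration of the watermarking bullet, a minimal Structured Streaming sketch in Java (not the original code): the socket source, the use of arrival time as the event time, and the 10-minute window and watermark threshold are all assumptions.

    // Illustrative sketch: count lines per 10-minute event-time window and let the
    // watermark drop data that arrives more than 10 minutes late.
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.current_timestamp;
    import static org.apache.spark.sql.functions.window;

    public class WatermarkDemo {
        public static void main(String[] args) throws Exception {
            SparkSession spark = SparkSession.builder()
                    .appName("watermark-demo")
                    .getOrCreate();

            // Lines from a local socket, stamped with an event-time column.
            Dataset<Row> lines = spark.readStream()
                    .format("socket")
                    .option("host", "localhost")
                    .option("port", 9999)
                    .load()
                    .withColumn("eventTime", current_timestamp());

            Dataset<Row> counts = lines
                    .withWatermark("eventTime", "10 minutes")
                    .groupBy(window(col("eventTime"), "10 minutes"))
                    .count();

            counts.writeStream()
                    .outputMode("update")
                    .format("console")
                    .start()
                    .awaitTermination();
        }
    }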
Student
EPAM
IT
1 year 3 months
08.2019 - 10.2020
Responsibilities as a developer:
Created a Java web application with servlets (see the sketch after this list);
Created asynchronous requests using jQuery;
Implemented a custom CAPTCHA;
Saved data in MySQL Server;
Implemented a transaction manager;
Implemented localization with properties files;
Implemented the user interface with JSP.
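For illustration, a tiny servlet of the kind used in such an application; the URL pattern and the response text are made-up examples, assuming the javax.servlet API on a Tomcat-era stack.

    // Illustrative sketch: respond to GET /hello with plain text; in the real
    // project the response would be rendered through a JSP view instead.
    import java.io.IOException;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet("/hello")
    public class HelloServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            response.setContentType("text/plain;charset=UTF-8");
            response.getWriter().println("Hello from the demo servlet");
        }
    }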
Key information
Databases: MongoDB, MySQL, Data Lake, Data Warehouse, relational databases, Hive, NoSQL, DynamoDB
Programming languages: Java, Scala, Python, SQL, JavaScript
Development environments: IntelliJ IDEA, MySQL Workbench
Technologies and knowledge: Spring, Spring Boot, Spring JPA, Spring Data, Spring Security, Hibernate, Apache Spark, JDBC, Spring Core, RESTful Web Services, Spring Web, Apache Kafka, Databricks, Insomnia, SOLID, Gradle, SonarQube, JUnit, RESTful API, Agile (Scrum), Jenkins, JSON, Spark Streaming, ACID, Git, Azure Data Lake Storage Gen2, ZooKeeper, Docker, Kubernetes, Azure Data Factory, Azure, Postman, JMeter, Confluent Control Center, Confluence, Apache Avro, Schema Registry, AWS, JSP, jQuery, Maven, Bamboo, SOAP, REST, Code Review, Software Engineering, Jira, Design Patterns, Continuous Integration, Microservices, Kafka Streams, XML, Spring Streaming, UML, HTML, Unit Testing, MVC, Clean Code, Web Development, GitHub, Bitbucket, CSS, Web API, Object-Oriented Programming, API Integrations, HTML5, YAML, Hadoop, Tomcat, Slack, TCP, HTTP, ETL, Mathematics
Studied at 3 educational institutions
Kharkiv National University of Radio Electronics
Computer Science
2025
Kharkiv National University of Radio Electronics
Computer Science
2023
Kharkiv Patent and Computer College
Computer Software Engineering
2020
Languages
English
Upper-intermediate
Ukrainian
Native
Can be interviewed in this language
Can be interviewed in this language
Courses, trainings, certificates
Intelligent analysis in Big Data (Master)
Issuing organization: Technische Hochschule Wildau
This module gives general information about the basics of Big Data and Machine Learning-based Data Mining methods, including preprocessing, clustering, classification, and regression tasks for Big Data purposes, and equips students with the knowledge and skills to apply those methods in practice using Python 3.
Methods for identifying parameters of design objects
Issuing organization: Technische Hochschule Wildau
In computer systems, a mathematical approach is used to make decisions about the designed object, and the corresponding optimization problems are solved. The parameters of these objects are not always known, which gives rise to the problem of parametric identification. The course covers the types of control and design objects as well as the classification and characteristics of identification methods. The main attention is paid to methods of parametric identification of social and economic processes using regression and introspective analysis, and to models of comparative identification under passive and active experiments.
Data Analysis and Visualisation (Master)
Issuing organization: Technische Hochschule Wildau
The discipline "Data Analysis and Visualisation" allows students to form scientifically grounded views on modern methods and technologies in the field of data analysis. It helps to better understand the basic concepts of data analysis and the popular methods of data analysis and visualisation, and to learn the principles of scientific research in this field.
Technologies of high-performance computing
Issuing organization: Technische Hochschule Wildau
Provides an understanding of the basic concepts of parallel computing, which are necessary for further study of models, methods, and technologies of parallel programming.
Additional information
Computer skills
As a Java Software Engineer I have worked on several projects. The first projects focused on web applications (Spring, Hibernate, MySQL/MongoDB). I also have experience working in Java with a Big Data technology stack: Apache Spark, Kafka, Hadoop. In addition, I have worked with Docker, deploying services in containers, and with cloud technologies such as Azure and Databricks. I am ready for interesting and challenging tasks that require analysis and research; I have worked in a multicultural team and am tolerant. Beyond the work mentioned above, at university I also gained experience developing Android mobile applications (Java, Gradle).
Bohdan
Software Engineer

Kyiv
Ready to relocate: Lviv, Odesa, Dnipro
Actively looking for a job
Full-time employment
Type of work: remote, hybrid, in office/on-site
Updated 16 hours ago