
Python
Our works
Affirmations application
Our team has developed the Affirmations application. Affirmations are one of the most powerful and effective tools in psychology and coaching: they can change destructive attitudes and work through many aspects of the personality without much effort. The application contains a database of affirmations developed by leading experts in psychology, covering topics such as:
- Love for your body;
- Raising self-esteem;
- Working on fears and stress;
- Wealth and financial flow;
- Harmonious relationships;
- The Best Day;
- Woman's magic;
- Authorship over your life;
Technologies: Flutter (MobX, Dio, Retrofit, GetIt, Injectable, Provider), Dart, Python, Django, ffmpeg. Features:
- streaming audio playback;
- voice recording;
- splicing and processing of audio via ffmpeg.
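As an illustration of the audio splicing feature, here is a minimal sketch of concatenating recorded fragments through the ffmpeg command line from Python; the file names are examples, not the application's actual assets.

import subprocess
import tempfile

def concat_audio(inputs, output):
    # Write the ffmpeg concat-demuxer listing, then re-encode the joined audio.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as listing:
        for path in inputs:
            listing.write(f"file '{path}'\n")
        list_path = listing.name
    subprocess.run(
        ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
         "-i", list_path, "-c:a", "libmp3lame", output],
        check=True,
    )

concat_audio(["intro.mp3", "affirmation_01.mp3", "outro.mp3"], "session.mp3")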
Virtual furniture fitting room using computer vision (CV)
Technologies: Python, OpenCV and others. We adapted advanced segmentation and detection models for accurate recognition of furniture and other objects in images, which significantly improved the accuracy and quality of object selection. Modern contour post-processing techniques were applied, improving detail and ensuring exact overlap of elements such as wallpaper, floors and ceilings. A diffusion model made it possible to remove objects from the images completely, reconstructing the previously hidden regions of the room in detail.
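As a rough illustration of the contour post-processing step, the sketch below cleans up a binary segmentation mask with OpenCV (morphological closing/opening plus polygon approximation of the outer contours); the thresholds are illustrative, and the actual models and pipeline are not shown.

import cv2
import numpy as np

def refine_mask(mask, kernel_size=5, epsilon_ratio=0.002):
    # Close small holes, remove specks and smooth the outer contour of a 0/255 mask.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    cleaned = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    refined = np.zeros_like(mask)
    for contour in contours:
        if cv2.contourArea(contour) < 500:          # skip tiny fragments
            continue
        epsilon = epsilon_ratio * cv2.arcLength(contour, True)
        approx = cv2.approxPolyDP(contour, epsilon, True)
        cv2.drawContours(refined, [approx], -1, 255, thickness=cv2.FILLED)
    return refined

# usage: wall_mask = refine_mask((model_output > 0.5).astype(np.uint8) * 255)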
E-commerce store data warehouse
Technologies: Python, MySQL, JSON, Excel, Power BI, Jupyter Notebook. Within this project we developed a system for gathering and analyzing information, along with a system for forecasting supplies and sales. Sales, stock balance and event data is gathered from the online marketplaces Wildberries, OZON, Yandex.Market and SberMegaMarket. Data that is unavailable through the marketplaces' APIs is uploaded to the system manually as XLS files, and a data cleaning process has been created as well. The client's data analysts have been provided with a special interface for modelling and further testing in business analytics tools such as Power BI. The prediction model in this project uses ML algorithms.
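A minimal sketch of the manual XLS upload path, assuming hypothetical column names rather than the real marketplace export format:

import pandas as pd

def load_manual_sales(xls_path):
    # Normalize a manually uploaded XLS sales report to the warehouse schema.
    df = pd.read_excel(xls_path)
    df = df.rename(columns={"SKU": "sku", "Sale Date": "date",
                            "Quantity": "qty", "Revenue": "revenue"})
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    df["qty"] = pd.to_numeric(df["qty"], errors="coerce").fillna(0).astype(int)
    df = df.dropna(subset=["sku", "date"]).drop_duplicates(subset=["sku", "date"])
    return df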
Chess Zombies The Game
Technologies: Python, FastAPI, Celery, Vue.js, Unity, WebSocket. Duration: in progress. This project is a free, unique video game called "Chess Zombies": a chess game in the Dark Fantasy style. The game has a complex plot with a deeply elaborated backstory and combines three main game mechanics: chess (the Fairy Chess and Chesscraft subgenres), RPG and TBS. The game uses the F2P model but also includes an NFT platform as an additional component; NFT expands in-game customization but is not required to fully complete the game. For social interaction, the game has a rating system, a clan system and in-game chat. On the technical side, we use Vue.js and Unity on the front end, Python, FastAPI and Celery on the back end, and WebSocket for communication. The main goal of the project is to implement a unique multi-mechanic system and create a competitive product in the video game industry.
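As an example of the communication layer, here is a minimal sketch of a FastAPI WebSocket endpoint for clan chat; the route and message handling are assumptions, not the production protocol.

from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
connections = {}   # clan_id -> set of open sockets

@app.websocket("/ws/chat/{clan_id}")
async def clan_chat(websocket: WebSocket, clan_id: str):
    await websocket.accept()
    connections.setdefault(clan_id, set()).add(websocket)
    try:
        while True:
            message = await websocket.receive_text()
            for peer in connections[clan_id]:        # broadcast to clan members
                await peer.send_text(message)
    except WebSocketDisconnect:
        connections[clan_id].discard(websocket)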
Kodi development project
Technologies: Python, Android. We created a fork of the Kodi video player for the specific needs of our customer and its manufacturers. Our code allows the use of video streaming provided by the EzServer software and restyles the Kodi UI to the client's branding and UX specifications. In short: we developed a Kodi plugin.
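For context, a Kodi add-on of this kind is a small Python entry point; the sketch below lists a single channel and resolves it to a stream URL. The EzServer URL and channel name are placeholders.

import sys
from urllib.parse import urlencode, parse_qsl

import xbmcgui
import xbmcplugin

HANDLE = int(sys.argv[1])       # plugin handle passed in by Kodi
BASE_URL = sys.argv[0]          # plugin:// base URL
STREAM_URL = "http://ezserver.example:8000/live/channel1.ts"   # placeholder

def list_channels():
    item = xbmcgui.ListItem(label="Channel 1")
    item.setProperty("IsPlayable", "true")
    url = BASE_URL + "?" + urlencode({"action": "play", "url": STREAM_URL})
    xbmcplugin.addDirectoryItem(HANDLE, url, item, isFolder=False)
    xbmcplugin.endOfDirectory(HANDLE)

def play(url):
    xbmcplugin.setResolvedUrl(HANDLE, True, xbmcgui.ListItem(path=url))

if __name__ == "__main__":
    params = dict(parse_qsl(sys.argv[2].lstrip("?")))
    if params.get("action") == "play":
        play(params["url"])
    else:
        list_channels()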
Hotel price comparison solution
Technologies: Python, Selenium, Scrapy, Vue, Bayesian Modeling, Game Modeling, Statistics. We developed a hotel price comparison solution for a group of hotels in Barcelona consisting of three components:
- a competitor price collection component, scraping booking prices from the price aggregator websites http://hostelbookers.com/ and http://hostelworld.com/ and from hotel websites;
- a data visualization tool showing different graphs of the current hotel prices, competitors' prices, supply/demand, number of available rooms and other parameters;
- a mathematical model optimizing the prices for the client's hotel based on all the competitor, historical and seasonal data.
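A minimal sketch of the price collection component as a Scrapy spider; the URL and CSS selectors are placeholders, since the real aggregator markup differs.

import scrapy

class HostelPricesSpider(scrapy.Spider):
    name = "hostel_prices"
    start_urls = ["http://hostelworld.com/search?city=Barcelona"]   # illustrative

    def parse(self, response):
        # One record per property card; selector names are placeholders.
        for card in response.css("div.property-card"):
            yield {
                "hotel": card.css("h2.title::text").get(default="").strip(),
                "price": card.css("span.price::text").get(),
            }
        next_page = response.css("a.pagination-next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)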
E-book reader
Technologies: C, Python, Calibre, PocketSphinx, Wit.ai. Duration: 1 year. The goal of the project was to develop software for e-book reader devices based on Calibre. The most important feature of the system is voice recognition, which is used for manipulating e-books, executing commands, navigating libraries and searching for books. Voice recognition is based on two different engines:
- for command execution, PocketSphinx is used as a grammar-based engine with strict rules and good voice recognition quality in difficult conditions (accent, noise, low volume and so on); a keyword-spotting sketch follows this list;
- additionally, the system includes a free-search feature that lets the user search for a book with custom requests such as "Search for a new book by Charles Stross" or "Search for the most popular books of this year".
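A minimal sketch of grammar-style command spotting with the pocketsphinx Python package; the keyphrase and threshold are examples, not the production grammar.

from pocketsphinx import LiveSpeech

# Listen on the default microphone and spot a single reader command.
speech = LiveSpeech(lm=False, keyphrase="next page", kws_threshold=1e-20)
for phrase in speech:
    print("Command detected:", phrase.segments(detailed=True))
    # here the reader UI would be told to turn the page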
Industrial facility monitoring application
Technologies: Python, Django, MongoDB. The application monitors facility activities in real time and warns users of any delays. Users can close, delay, reschedule and reactivate tasks. The application runs a real-time check for Tasks that should be started and monitors each Task scheduled for a certain time. Features:
- several access levels to the data and features;
- stores data about companies, equipment, operating personnel and all available documents;
- runs an activation function to start monitoring the Task's date and time;
- tracks step durations, step by step, according to their sequence;
- warns users with an audio alarm on the computer if the Task time is exceeded;
- checks whether all steps are done on time and notifies users that the Task has been finished;
- marks the Task as closed and sets the corresponding End Date and Time.
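A minimal Django sketch of the Task activation check described above; the field names are assumptions rather than the production schema.

from django.db import models
from django.utils import timezone

class Task(models.Model):
    OPEN, ACTIVE, CLOSED = "open", "active", "closed"
    STATUS_CHOICES = [(OPEN, "Open"), (ACTIVE, "Active"), (CLOSED, "Closed")]

    name = models.CharField(max_length=200)
    scheduled_start = models.DateTimeField()
    allowed_duration = models.DurationField()
    status = models.CharField(max_length=10, choices=STATUS_CHOICES, default=OPEN)
    end_time = models.DateTimeField(null=True, blank=True)

def check_tasks():
    # Activate Tasks whose start time has come and flag those exceeding their duration.
    now = timezone.now()
    Task.objects.filter(status=Task.OPEN, scheduled_start__lte=now).update(status=Task.ACTIVE)
    overdue = [task for task in Task.objects.filter(status=Task.ACTIVE)
               if now > task.scheduled_start + task.allowed_duration]
    return overdue   # the caller raises the audio alarm for these Tasks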
QC File and Workflow Application
Technologies: Python, Django, JavaScript, React.JS, JQuery, Bootstrap, HTML, CSS, Amazon S3, MongoDB. Duration: 6 months. The business goal of this project was to allow users to upload files with mortgage information, fill in a checklist about the mortgage, and generate and view reports (individual and monthly). The application has different types of users with different access rights: a 'customer' can upload files and view reports, a 'reviewer' can fill in the checklist and generate reports, and an 'administrator' can manage user accounts. The application uses S3 storage for holding files.
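A minimal sketch of the customer upload path to S3 with boto3; the bucket name and key scheme are assumptions.

import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "qc-mortgage-files"   # example bucket name

def store_upload(uploaded_file, customer_id):
    # Stream an uploaded mortgage file to S3 and return the key saved on the record.
    key = f"uploads/{customer_id}/{uuid.uuid4()}_{uploaded_file.name}"
    s3.upload_fileobj(uploaded_file, BUCKET, key)
    return key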
Video solution for athletics
Technologies: Python, OpenCV. Duration: 6 months. The goal of the project was to analyze the video of a tennis game and break the match into shorter videos: one video per point. It was required to remove the parts of the match where the players did not play (rest periods, the gaps between points, etc.); this allowed game statisticians to review the game much faster, because all "idle" periods were removed and the total length (and therefore file size) was much shorter. The logic for splitting the video was developed based on the analysis of game events detected in the video, the position, speed and posture of the players, ball movement and location, and other parameters. CV algorithms used included optical flow, background subtraction, HoG detection, pose detection and others.
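A minimal sketch of one of the simpler signals, a per-frame motion level from background subtraction, that can help separate active rallies from idle time; the threshold is illustrative.

import cv2

def motion_profile(video_path, motion_threshold=0.01):
    # Return one boolean per frame: True when enough pixels are moving to suggest play.
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
    active = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        active.append((mask > 0).mean() > motion_threshold)
    cap.release()
    return active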
Reviewing legal documents using NLP techniques
Technologies: Python, NLP, Machine Learning. Duration: 3 months. Our client builds financial software (named 'CF Engine') to model complex financial products (RMBS, ABS, CLO, etc.). The main goal of the project is to extend this software with a feature that lets users review the related legal documents based on the information from the model. The developed model needs to check whether a specific document corresponds to one of the created models. For example, if the processed document is a mortgage, then the model:
- parses the mortgage document (from PDF, Word or plain text format);
- checks whether the document contains all required information (all parties are specified and described correctly, the property is described, the interest rate is specified, all information required by law is provided and so on);
- if the document fits the model, extracts the important information (parties, property description, interest rates and so on) and provides it as a summary for user review.
The system supports different formats of input documents and different types of documents, such as mortgages, car loans, commercial loans and so on. The system also supports different countries of operation, i.e. a different document structure for each country and different languages.
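As a much-simplified illustration of one such check, the sketch below looks for a stated interest rate in the document text; the real system relies on NLP and ML models rather than a single pattern.

import re

RATE_PATTERN = re.compile(r"interest rate of\s+(\d+(?:\.\d+)?)\s*%", re.IGNORECASE)

def extract_interest_rate(text):
    # Return the first stated interest rate, or None if the document omits it.
    match = RATE_PATTERN.search(text)
    return float(match.group(1)) if match else None

print(extract_interest_rate("The Borrower agrees to an interest rate of 4.25% per annum."))  # 4.25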
Sports prediction software
Technologies: Python, GLPK. Duration: 2 months. The business goal of this project was to build a tool that creates the best possible lineup of fantasy sport players given their projected fantasy points for the next game. Lineups were built for several sports such as NBA, NFL and MLB and for several daily fantasy sports (DFS) websites such as Fanduel and DraftKings. The tool can build a list of optimal lineups based on the same set of players and their projected fantasy points. Among other options, it is possible to build the best lineups with one or more players already locked into the lineup, to constrain each player's exposure in the list of optimal lineups, to set the minimum and maximum number of players from each team in a lineup, and more. Among the technical issues we solved was finding a linear programming model that describes the lineup and all corresponding constraints most correctly and efficiently. The project implemented a mixed integer linear programming model, which was solved by the GNU Linear Programming Kit (GLPK) solver through the Python interface library Pyomo. The tool was implemented as a web service: model parameters were populated dynamically with JSON data through a REST-like API by the end user, and the best lineups were also returned to the user as JSON data. The linear programming model actually consisted of six models, one for each combination of sport type (NBA, NFL, MLB) and DFS website (Fanduel, DraftKings).
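A minimal Pyomo/GLPK sketch of the lineup model with only a salary cap and a roster-size constraint; the player data and limits are illustrative, and the real model has many more constraints.

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           Binary, maximize, SolverFactory)

players = [
    {"name": "A", "points": 42.1, "salary": 9800},
    {"name": "B", "points": 35.7, "salary": 7600},
    {"name": "C", "points": 28.3, "salary": 5400},
    {"name": "D", "points": 21.0, "salary": 3500},
]
SALARY_CAP, ROSTER_SIZE = 20000, 3

m = ConcreteModel()
m.pick = Var(range(len(players)), domain=Binary)   # 1 if the player is in the lineup
m.points = Objective(expr=sum(p["points"] * m.pick[i] for i, p in enumerate(players)),
                     sense=maximize)
m.cap = Constraint(expr=sum(p["salary"] * m.pick[i] for i, p in enumerate(players)) <= SALARY_CAP)
m.size = Constraint(expr=sum(m.pick[i] for i in range(len(players))) == ROSTER_SIZE)

SolverFactory("glpk").solve(m)
print([p["name"] for i, p in enumerate(players) if m.pick[i].value > 0.5])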
Looking for correlations and making predictions in an e-commerce system
Technologies: Python, Scikit-learn. Duration: 5 months. We implemented a system that ranks customers by making projections critical for business actions: upgrading from a free to a premium account, the expected amount of payments for premium accounts, and how likely a person is to stop using the service. The system is trained on a database of user account information and the series of actions users performed in the service. When new user data is provided to the system, a ranking based on projected future actions is produced. We used the following techniques:
- feature engineering for machine learning;
- random forest / decision tree models;
- a "bag of words"-like model for user actions.
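A minimal scikit-learn sketch of the "bag of words" over user actions combined with a random forest; the action names and labels are synthetic.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline

# Each user is the space-joined sequence of actions they performed;
# labels mark who later upgraded to a premium account.
user_actions = [
    "login view_catalog add_to_cart checkout",
    "login view_catalog logout",
    "login add_to_cart add_to_cart checkout checkout",
]
upgraded = [1, 0, 1]

model = make_pipeline(CountVectorizer(), RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(user_actions, upgraded)

# Rank new users by the projected likelihood of upgrading.
print(model.predict_proba(["login view_catalog add_to_cart"])[:, 1])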
Tank Level Prediction Algorithm
Technologies: Python, Scikit, SciPy. Duration: 6 months. The goal of the project was to predict the level of fluid in a tank based on the data returned by a sensor connected to the tank after a strike is generated by a specific device. The data provided by the client contains information returned by the sensors at various tank levels. After applying principal component analysis (PCA) to extract patterns from the signals in the spectral domain, a simple logistic regression model was created. The model demonstrated good results and was used for predicting levels with 98%-100% accuracy. In short: we developed a predictive analytics solution for the oil industry, aimed at predicting the level of oil based on sensor data. #Data Science
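A minimal sketch of the PCA-plus-logistic-regression pipeline on spectral features; the data here is synthetic, standing in for the sensor recordings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # one row per strike, spectral-domain features
y = rng.integers(0, 4, size=200)        # e.g. four discrete fill-level bands

model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X, y)
print(model.predict(X[:5]))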
Japanese-Russian translation service
Technologies: Travatar, Moses, EDA, KyTea, Python. Duration: 1 year. The goal of the project was to create a text translator from Japanese into Russian. For this purpose, we chose a statistical translation model based on a comparison of a large body of parallel texts: the client's texts from a Q&A service (questions in Russian with their Japanese translations) were used as this corpus. We tried several statistical translators, such as Travatar and Moses; Travatar was used in the end, since it showed better translation quality based on objective metrics. One of the key challenges we faced was the low quality of the parallel texts and the lack of sentence-level alignment. To align them, we developed a statistical algorithm that searched for the closest statements based on a dictionary of n-grams.
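A simplified, same-language sketch of picking the closest sentence by n-gram overlap; the real algorithm additionally maps n-grams across languages through a dictionary before scoring.

from collections import Counter

def char_ngrams(text, n=3):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def closest_sentence(source, candidates, n=3):
    # Pick the candidate sharing the most character n-grams with the source sentence.
    src = char_ngrams(source, n)
    best, best_score = None, -1
    for cand in candidates:
        overlap = sum((src & char_ngrams(cand, n)).values())
        if overlap > best_score:
            best, best_score = cand, overlap
    return best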