Works
A Sample of My Previous Work
Senior-Friendly Web Content Transformation System
June 2023 - June 2025
Company Overview
Lylu GmbH develops senior-friendly software that redesigns existing internet content to make digital technology more accessible for older adults, thereby improving their quality of life.
Role: Machine Learning Engineer and Data Scientist
Solely responsible for developing and implementing an innovative software tool that transforms existing web pages into a barrier-free user interface tailored to the needs of older adults.
Key Technologies
- PyTorch
- LangChain
- Google Cloud Platform (GCP)
- Microsoft Azure (Azure AI Services)
- Natural Language Processing (NLP)
- Transformers
- Vector Databases
- APIs
Key Innovations
- Advanced NLP Techniques: Implemented text embedding, similarity search, and semantic similarity for improved content understanding and transformation.
- Model Fine-Tuning: Worked on fine-tuning open-source models to better suit the specific needs of senior-friendly content adaptation.
- Contextual Understanding AI: Developed methods to autonomously transform web pages into a barrier-free interface specifically designed for older adults.
- Efficient Data Architecture: Utilized vector databases to enhance platform functionality, enabling complex web content to be converted into a simplified, structured JSON format (see the sketch after this list).
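To illustrate the embedding-and-similarity approach behind this architecture, here is a minimal sketch; the model name, intent-based schema, and example sections are illustrative assumptions, not the production configuration:

```python
# Hypothetical sketch: rank page sections by semantic relevance to a user
# intent and emit a simplified JSON structure. Model and schema are
# illustrative, not the production setup.
import json
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def simplify_page(sections, intent, top_k=3):
    """Keep only the sections most relevant to the user's intent."""
    section_emb = model.encode(sections, convert_to_tensor=True)
    intent_emb = model.encode(intent, convert_to_tensor=True)
    hits = util.semantic_search(intent_emb, section_emb, top_k=top_k)[0]
    simplified = {
        "intent": intent,
        "sections": [
            {"text": sections[h["corpus_id"]], "score": round(h["score"], 3)}
            for h in hits
        ],
    }
    return json.dumps(simplified, ensure_ascii=False, indent=2)

print(simplify_page(
    ["Opening hours: Mon-Fri 9-17", "Cookie banner text", "Contact: 030 1234567"],
    intent="How do I contact the office?",
))
```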
Implementation Highlights
- Designed and implemented a comprehensive machine learning pipeline for web content analysis and transformation.
- Developed APIs to seamlessly integrate the transformation tool with existing web infrastructures (a minimal endpoint sketch follows this list).
- Created a scalable system capable of processing and adapting diverse web content types.
- Implemented robust error handling and performance optimization techniques to ensure reliable operation under various conditions.
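A minimal sketch of how such a transformation endpoint could be exposed; the route, request model, and the transform_page stub are hypothetical, not the actual Lylu API:

```python
# Illustrative FastAPI wrapper for a page-transformation service.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, HttpUrl

app = FastAPI(title="Transformation API (sketch)")

class TransformRequest(BaseModel):
    url: HttpUrl
    reading_level: str = "simple"

def transform_page(url: str, level: str) -> dict:
    # Placeholder for the ML pipeline (fetch, segment, embed, simplify).
    return {"level": level, "blocks": []}

@app.post("/transform")
def transform(req: TransformRequest) -> dict:
    try:
        return {"url": str(req.url), "content": transform_page(str(req.url), req.reading_level)}
    except ValueError as exc:
        raise HTTPException(status_code=422, detail=str(exc))
```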
Key Achievements
- Successfully developed and implemented the core technology driving Lylu GmbH's senior-friendly web transformation tool.
- Consistently exceeded performance targets and project goals, even under challenging circumstances.
- Demonstrated high levels of initiative, efficiency, and independent problem-solving throughout the project lifecycle.
- Received recognition for delivering outstanding work results and achieving project objectives.
Skills Demonstrated
- Advanced proficiency in machine learning and data science techniques
- Expertise in NLP and content transformation technologies
- Strong ability to conceptualize and implement complex AI systems
- Proficiency in working with and fine-tuning large language models
- Experience with cloud platforms (GCP) for deploying ML solutions
- Ability to integrate multiple cutting-edge technologies into a cohesive system
- Excellent analytical and problem-solving skills
- Strong communication and teamwork abilities
AI Engineer: Specialist in Advanced LLM Applications and Retrieval Systems
January 2023 - Present (Freelance)
Role Overview
As a freelance AI Engineer, I specialize in developing cutting-edge AI systems leveraging state-of-the-art Large Language Models (LLMs), advanced retrieval techniques, and custom-tailored solutions for complex information processing tasks. My work focuses on pushing the boundaries of what's possible with AI, creating systems that are not only intelligent but also efficient, scalable, and adaptable to real-world business needs.
Key Technologies and Frameworks
- Large Language Models: GPT-4, Claude, LLaMA 2, Mixtral 8x7B
- APIs: OpenAI API, Anthropic Claude API
- Python 3.9+, PyTorch 2.0+
- Hugging Face Transformers, LangChain, LlamaIndex
- Vector Databases: Pinecone, Weaviate, Milvus
- Elasticsearch for hybrid search
- Neo4j for graph-based knowledge representation
- FastAPI for backend services
- Docker and Kubernetes for deployment
- MLflow for experiment tracking and model management
- Ray for distributed computing
Major Projects
Advanced Hybrid Retrieval System with LLM Integration
Objective: Develop a state-of-the-art retrieval system combining dense and sparse retrieval methods with LLM-powered reranking.
- Implemented a hybrid search architecture using dense vector embeddings (via SentenceTransformers) and sparse BM25 retrieval.
- Developed a custom reranking module using the Mixtral 8x7B model, fine-tuned on domain-specific data for improved relevance assessment.
- Integrated a novel top-K retrieval optimization algorithm, dynamically adjusting K based on query complexity and result diversity.
- Implemented efficient caching mechanisms and query planning to reduce latency in high-throughput scenarios.
- Result: Achieved a 40% improvement in Mean Reciprocal Rank (MRR) compared to traditional retrieval methods, with sub-100ms latency for most queries.
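The following sketch shows the general shape of this hybrid dense/sparse retrieval; reciprocal rank fusion stands in here for the project's proprietary fusion and LLM reranking logic, and the corpus is a toy example:

```python
# Hybrid retrieval sketch: BM25 (sparse) + SentenceTransformers (dense),
# combined with reciprocal rank fusion (RRF).
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = [
    "how to reset a forgotten password",
    "annual revenue report 2023",
    "password policy for administrator accounts",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = model.encode(docs, convert_to_tensor=True)
bm25 = BM25Okapi([d.split() for d in docs])

def hybrid_search(query: str, k: int = 2, rrf_k: int = 60) -> list[str]:
    # Rank documents separately with each retriever.
    sparse_scores = bm25.get_scores(query.split())
    sparse_rank = sorted(range(len(docs)), key=lambda i: -sparse_scores[i])
    dense_hits = util.semantic_search(
        model.encode(query, convert_to_tensor=True), doc_emb, top_k=len(docs)
    )[0]
    dense_rank = [hit["corpus_id"] for hit in dense_hits]
    # RRF: sum 1 / (rrf_k + rank) across both rankers.
    fused = {
        i: 1 / (rrf_k + sparse_rank.index(i)) + 1 / (rrf_k + dense_rank.index(i))
        for i in range(len(docs))
    }
    return [docs[i] for i in sorted(fused, key=fused.get, reverse=True)[:k]]

print(hybrid_search("reset password"))
```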
Multi-Modal AI Assistant with LLaMA 2 and Claude Integration
Objective: Create an AI assistant capable of processing and generating multi-modal content, leveraging the strengths of different LLMs.
- Fine-tuned LLaMA 2 (70B parameter version) on a custom dataset for domain-specific knowledge and task handling.
- Integrated Claude API for advanced reasoning tasks and handling of complex, nuanced queries.
- Developed a novel prompt routing system that dynamically selects the most appropriate LLM based on query type and complexity.
- Implemented CLIP for image understanding and a custom GPT-4 Vision pipeline for image-to-text tasks.
- Created a sophisticated prompt engineering system with dynamic few-shot learning capabilities.
- Result: A versatile AI assistant capable of handling text, image, and mixed-modal inputs with high accuracy and contextual understanding.
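A stripped-down sketch of the prompt-routing idea: a rule-based router stands in here for the learned routing described above, and the model names are placeholders:

```python
# Lightweight router that picks a backend model per request.
# Heuristics and model names are illustrative, not the deployed logic.
from dataclasses import dataclass

@dataclass
class Route:
    model: str
    reason: str

def route_query(query: str, has_image: bool = False) -> Route:
    if has_image:
        return Route("gpt-4-vision", "multi-modal input")
    if len(query.split()) > 80 or "step by step" in query.lower():
        return Route("claude", "long-form / complex reasoning")
    return Route("llama-2-70b-finetuned", "domain-specific default")

print(route_query("Summarize this contract step by step ..."))
```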
Enterprise Knowledge Graph with LLM-Powered Query Interface
Objective: Develop a comprehensive knowledge management system combining graph databases, vector search, and LLM-based natural language understanding.
- Designed a scalable knowledge graph using Neo4j, integrating data from various enterprise sources.
- Implemented a hybrid retrieval system combining graph traversal algorithms with dense vector similarity search in Pinecone.
- Developed a custom embedding pipeline using a fine-tuned BERT model for domain-specific entity and relationship embedding.
- Created an LLM-powered natural language interface using GPT-4, allowing complex query formulation and result explanation.
- Implemented an innovative "graph-of-thoughts" reasoning approach, enabling the LLM to perform multi-hop reasoning over the knowledge graph.
- Result: Enabled natural language querying of complex enterprise knowledge, reducing time-to-insight by 60% for data analysts and decision-makers.
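A hedged sketch of the graph-plus-vector lookup pattern: vector search supplies seed entities, then a Cypher query expands multi-hop context around them. The connection details, the Entity label, and the vector_seed stub are assumptions, not the deployed schema:

```python
# Graph context expansion around vector-retrieved seed entities.
from neo4j import GraphDatabase

def vector_seed(query: str):
    # Placeholder for a Pinecone similarity search returning entity ids.
    return ["acct-42", "acct-17"]

CYPHER = """
MATCH (e:Entity)-[r*1..2]-(n:Entity)
WHERE e.id IN $seeds
RETURN e.id AS seed, n.id AS neighbor, [rel IN r | type(rel)] AS path
LIMIT 50
"""

def graph_context(uri: str, auth: tuple, query: str):
    # Expand 1-2 hop neighborhoods around the seeds for LLM consumption.
    driver = GraphDatabase.driver(uri, auth=auth)
    try:
        with driver.session() as session:
            return [dict(record) for record in session.run(CYPHER, seeds=vector_seed(query))]
    finally:
        driver.close()
```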
Technical Innovations
- Adaptive Retrieval Fusion: Developed a novel algorithm that dynamically adjusts the weighting of different retrieval methods (dense, sparse, graph-based) based on query characteristics and past performance.
- LLM Ensemble Techniques: Created a sophisticated system for combining outputs from multiple LLMs (e.g., GPT-4, Claude, LLaMA 2) using adaptive weighting and confidence scoring.
- Efficient Fine-Tuning Pipeline: Implemented a streamlined process for fine-tuning large language models using techniques like LoRA and QLoRA, significantly reducing computational requirements while maintaining performance (a minimal setup is sketched after this list).
- Contextual Prompt Optimization: Developed an AI-driven system that automatically generates and refines LLM prompts based on user context, query intent, and ongoing conversation flow.
- Federated Learning for LLMs: Designed a privacy-preserving method for continually improving language models using decentralized data from multiple clients, addressing data sensitivity concerns in enterprise environments.
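The LoRA setup referenced above, in minimal form with Hugging Face peft; the base model, target modules, and hyperparameters are representative defaults, not the production configuration:

```python
# Minimal LoRA fine-tuning setup; only the adapter weights are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # small stand-in model
config = LoraConfig(
    r=8,                # rank of the low-rank update matrices
    lora_alpha=16,      # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```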
Skills and Expertise
- Deep expertise in Large Language Models, including fine-tuning, prompt engineering, and efficient deployment strategies
- Advanced knowledge of information retrieval techniques, including vector search, hybrid retrieval, and re-ranking methodologies
- Proficiency in designing and implementing end-to-end AI pipelines, from data preprocessing to model deployment and monitoring
- Strong skills in optimization techniques for high-performance AI systems, including distributed computing and efficient resource utilization
- Experience with MLOps practices, including experiment tracking, model versioning, and automated deployment in cloud environments
- Expertise in graph algorithms and their application in knowledge retrieval and reasoning systems
- Familiarity with ethical AI principles, including bias mitigation and fairness in language models
Impact and Results
- Consistently delivered projects that exceeded client expectations, with an average satisfaction score of 4.9/5
- Reduced query response times by 70% while improving relevance scores by 35% in large-scale retrieval systems
- Published a technical paper on "Adaptive Hybrid Retrieval for Enterprise Knowledge Systems" in a top-tier AI conference
- Contributed to open-source projects, including performance optimizations for the LlamaIndex library
- Mentored 5 junior AI engineers, focusing on advanced LLM techniques and efficient system design
Continuous Learning and Development
- Completed advanced courses in "Efficient Fine-tuning of Large Language Models" and "Graph Neural Networks for Knowledge Representation"
- Regular participant and speaker at AI conferences, including NeurIPS, ICLR, and the LangChain Conference
- Active member of AI research communities, contributing to discussions on LLM advancements and ethical AI development
Advanced NLP: Optimizing Machine Learning Models for Simple Language Classification
September 2022
Project Overview
This research project focused on developing and optimizing machine learning models for classifying and processing simple language in news articles. The study aimed to enhance NLP techniques specifically for simple language texts, with implications for accessibility and information retrieval.
Key Technologies and Libraries
- Python 3.8
- pandas 1.3.5 for data manipulation
- scikit-learn 1.0.2 for machine learning models and evaluation
- spaCy 3.2.0 for advanced NLP tasks
- NLTK 3.6.5 for text preprocessing
- TensorFlow 2.8.0 for deep learning models
- Gensim 4.1.2 for word embeddings
Machine Learning Models and Techniques
- Traditional ML Models:
  - Logistic Regression with L2 regularization
  - Random Forest Classifier (n_estimators=100, max_depth=10)
  - Support Vector Machine (kernel='rbf', C=1.0)
  - Gradient Boosting Classifier (n_estimators=100, learning_rate=0.1)
- Deep Learning Models:
  - Convolutional Neural Network (CNN) for text classification
  - Long Short-Term Memory (LSTM) network
  - Bidirectional LSTM with attention mechanism (sketched after this list)
- Word Embeddings: Utilized pre-trained GloVe embeddings and trained custom Word2Vec models on the simple language corpus
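A condensed Keras sketch of the BiLSTM-with-attention classifier referenced above; the vocabulary size, dimensions, and the Luong-style self-attention layer are representative choices rather than the exact architecture used:

```python
# BiLSTM + self-attention binary classifier (simple vs. standard language).
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, MAXLEN, EMB = 20000, 200, 100  # GloVe-sized embedding dimension

inputs = layers.Input(shape=(MAXLEN,))
x = layers.Embedding(VOCAB, EMB)(inputs)          # optionally GloVe-initialized
h = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
att = layers.Attention()([h, h])                  # self-attention over timesteps
x = layers.GlobalAveragePooling1D()(att)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```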
Feature Engineering and Preprocessing
- TF-IDF vectorization with n-gram range (1,3)
- Custom feature extraction for simple language characteristics (a condensed sketch follows this list):
  - Sentence complexity scores
  - Readability metrics (Flesch-Kincaid, SMOG)
  - Part-of-speech tag distributions
  - Named Entity Recognition (NER) densities
- Text normalization: lowercasing, punctuation removal, lemmatization
- Stop words removal with custom list for simple language
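A condensed sketch of the feature extractor referenced above; textstat stands in here for the readability-metric implementation, and the feature set is abbreviated:

```python
# Simple-language feature extraction with spaCy; textstat is an assumed
# stand-in for the readability metrics.
from collections import Counter
import spacy
import textstat

nlp = spacy.load("en_core_web_sm")

def simple_language_features(text):
    doc = nlp(text)
    n_tokens = max(len(doc), 1)
    pos_counts = Counter(tok.pos_ for tok in doc)
    return {
        "flesch_kincaid": textstat.flesch_kincaid_grade(text),
        "smog": textstat.smog_index(text),
        "avg_sentence_len": n_tokens / max(len(list(doc.sents)), 1),
        "noun_ratio": pos_counts["NOUN"] / n_tokens,
        "ner_density": len(doc.ents) / n_tokens,
    }

print(simple_language_features("The cat sat on the mat. It was happy."))
```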
Experimentation and Optimization
- Implemented k-fold cross-validation (k=5) for robust model evaluation
- Performed hyperparameter tuning using GridSearchCV and RandomizedSearchCV
- Developed a custom metric for simple language classification accuracy (its wiring into the search is sketched after this list)
- Employed ensemble methods, including Voting Classifier and Stacking, to improve overall performance
- Conducted ablation studies to identify most influential features for simple language classification
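Wiring a custom metric into hyperparameter search, as referenced above, looks roughly like this; the metric body and toy data are stand-ins for the actual simple-language score:

```python
# Custom scorer plugged into GridSearchCV via make_scorer.
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def simple_lang_score(y_true, y_pred):
    # Placeholder: the real metric weighted errors on simple-language texts.
    return f1_score(y_true, y_pred, pos_label=1)

X, y = make_classification(n_samples=200, random_state=0)  # toy stand-in data
grid = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1.0, 10.0], "kernel": ["rbf", "linear"]},
    scoring=make_scorer(simple_lang_score),
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```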
Key Findings and Results
- Bidirectional LSTM with attention mechanism achieved the highest accuracy (92.3%) on simple language classification
- Custom Word2Vec embeddings trained on simple language corpus outperformed pre-trained GloVe embeddings by 3.7% in classification tasks
- Ensemble of SVM, Random Forest, and BiLSTM improved overall F1-score by 2.1% compared to best individual model
- Readability metrics and custom simple language features improved model performance by 5.2% when added to traditional TF-IDF features
Impact and Practical Applications
- Accessible Content Recommendation: Developed a prototype recommendation system using the top-performing model to suggest simple language articles to users
- Automated Simplicity Checker: Created a tool that uses the trained models to assess the simplicity level of given texts and suggest improvements
- Cross-lingual Simple Language Detection: Extended the models to identify simple language content across multiple languages using multilingual word embeddings
Advanced Skills Demonstrated
- Design and implementation of complex machine learning pipelines for NLP tasks
- Proficiency in feature engineering for specialized language processing
- Advanced model selection, evaluation, and ensemble techniques
- Deep learning architecture design and optimization for text classification
- Development of custom evaluation metrics for specific NLP challenges
- Application of machine learning models to real-world accessibility problems
Optimizing Classification Performance Through Advanced Synthetic Data Generation Techniques
June 2022
Project Overview
Conducted a comprehensive study to evaluate and compare various synthetic data generation methods for enhancing classifier performance in natural language processing tasks. This research provides a systematic framework for selecting optimal data augmentation techniques, significantly reducing the need for trial-and-error approaches in machine learning pipelines.
Key Technologies and Libraries
- Python 3.8+
- scikit-learn 0.24.2 for machine learning models and evaluation metrics
- NLTK 3.6.2 for natural language processing tasks
- gensim 4.0.1 for word embeddings and linguistic transformations
- transformers 4.6.1 for state-of-the-art NLP models
- googletrans 3.1.0a0 for back translation
- matplotlib and seaborn for data visualization
- scikit-learn's manifold module for the t-SNE implementation
Datasets and Preprocessing
- Utilized three binary classification datasets with varying sample sizes and class distributions
- Implemented robust preprocessing pipeline:
  - Text cleaning (removing special characters, lowercasing)
  - Tokenization using NLTK's word_tokenize
  - Stop word removal
  - Lemmatization using WordNetLemmatizer
- Performed exploratory data analysis to understand class imbalances and text characteristics
Advanced Data Augmentation Techniques
- AEDA (An Easier Data Augmentation):
  - Implemented custom algorithm for random punctuation insertion
  - Controlled insertion rate to maintain text coherence
- EDA (Easy Data Augmentation):
  - Developed modular functions for synonym replacement, random insertion, random swap, and random deletion
  - Implemented adaptive augmentation rate based on sentence length
- WordNet-based Augmentation (a minimal sketch follows this list):
  - Utilized NLTK's WordNet interface for synonym retrieval
  - Implemented part-of-speech aware synonym replacement
  - Developed word sense disambiguation mechanism to ensure contextually appropriate synonyms
- Back Translation:
  - Integrated Google Translate API for multi-language translation chains
  - Implemented error handling and rate limiting to manage API requests
  - Experimented with various intermediate languages to maximize diversity
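The sketch referenced above: POS-aware WordNet synonym replacement in miniature, with an illustrative replacement rate and POS mapping (the word-sense disambiguation step is omitted for brevity):

```python
# POS-aware synonym replacement using NLTK's WordNet interface.
import random
import nltk
from nltk.corpus import wordnet as wn

# Requires: nltk.download("punkt"), nltk.download("wordnet"),
#           nltk.download("averaged_perceptron_tagger")
POS_MAP = {"NN": wn.NOUN, "VB": wn.VERB, "JJ": wn.ADJ, "RB": wn.ADV}

def augment(sentence, p=0.2, seed=0):
    random.seed(seed)
    tokens = nltk.word_tokenize(sentence)
    out = []
    for word, tag in nltk.pos_tag(tokens):
        wn_pos = POS_MAP.get(tag[:2])
        synsets = wn.synsets(word, pos=wn_pos) if wn_pos else []
        lemmas = {l.name().replace("_", " ") for s in synsets for l in s.lemmas()} - {word}
        # Replace with probability p when a synonym of the same POS exists.
        out.append(random.choice(sorted(lemmas)) if lemmas and random.random() < p else word)
    return " ".join(out)

print(augment("The quick brown fox jumps over the lazy dog"))
```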
Machine Learning Models and Evaluation
- Feature Extraction:
  - TF-IDF vectorization with n-gram range (1,2)
  - Word embeddings using Word2Vec trained on the augmented corpus
- Classification Models:
  - Logistic Regression with L2 regularization
  - Support Vector Machine (SVM) with RBF kernel
  - Random Forest Classifier
  - Gradient Boosting Classifier
- Evaluation Metrics: Accuracy, Precision, Recall, F1-score, ROC-AUC
- Cross-validation: Stratified 5-fold cross-validation to ensure robust performance estimation
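A minimal version of this evaluation harness: stratified 5-fold cross-validation over TF-IDF features with the metric set listed above, using a public dataset as a stand-in for the study's corpora:

```python
# Stratified 5-fold CV with multiple metrics over a TF-IDF pipeline.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.pipeline import make_pipeline

data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_validate(pipe, data.data, data.target, cv=cv,
                        scoring=["accuracy", "precision", "recall", "f1", "roc_auc"])
print({k: round(v.mean(), 3) for k, v in scores.items() if k.startswith("test_")})
```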
Advanced Visualization and Analysis
- t-SNE Visualization (a condensed sketch follows this list):
  - Implemented t-SNE algorithm for dimensionality reduction of high-dimensional text features
  - Optimized perplexity and learning rate parameters for each dataset
  - Generated 2D and 3D visualizations to analyze class separability
- Performance Analysis:
  - Developed custom scripts to aggregate results across multiple runs and augmentation methods
  - Created comparative visualizations (box plots, heatmaps) to illustrate the impact of each augmentation technique
  - Conducted statistical significance tests (paired t-tests) to validate improvements
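The t-SNE sketch referenced above, reduced to its essentials; the perplexity shown is a typical starting point rather than a tuned value, and the dataset is a public stand-in:

```python
# 2D t-SNE projection of TF-IDF features to inspect class separability.
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

data = fetch_20newsgroups(subset="train", categories=["sci.med", "sci.space"])
X = TfidfVectorizer(max_features=2000).fit_transform(data.data[:500])
emb = TSNE(n_components=2, perplexity=30, init="random", random_state=0).fit_transform(X.toarray())
plt.scatter(emb[:, 0], emb[:, 1], c=data.target[:500], s=8, cmap="coolwarm")
plt.title("t-SNE of TF-IDF features (sketch)")
plt.show()
```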
Key Findings and Results
- Back Translation consistently outperformed other methods, improving F1-scores by an average of 7.3% across all datasets
- WordNet-based augmentation showed significant improvements for datasets with limited vocabulary diversity
- AEDA proved effective for datasets with formal language, improving precision by up to 5.2%
- EDA methods demonstrated balanced improvements across all metrics, with an average increase of 4.1% in overall accuracy
- t-SNE visualizations revealed improved class separability for all augmentation methods, with Back Translation showing the most distinct clusters
Impact and Practical Applications
- Developed a comprehensive guide for selecting optimal data augmentation techniques based on dataset characteristics
- Created a modular Python package for easy integration of augmentation methods into existing NLP pipelines
- Demonstrated the potential for significant performance improvements in low-resource NLP scenarios
- Findings applicable to various domains including sentiment analysis, content moderation, and document classification
Future Work
- Exploration of more advanced augmentation techniques using generative models (e.g., GPT-3 for text generation)
- Investigation of the impact of augmentation on model robustness and generalization to out-of-distribution samples
- Development of adaptive augmentation strategies that dynamically select methods based on input characteristics
- Extension of the study to multi-class and multi-label classification tasks
Advanced Skills Demonstrated
- Expertise in natural language processing and text augmentation techniques
- Proficiency in designing and implementing comprehensive machine learning experiments
- Advanced data visualization and analysis skills, including dimensionality reduction techniques
- Strong statistical analysis capabilities for validating experimental results
- Experience in developing modular, reusable code for complex NLP tasks
- Ability to synthesize findings into actionable insights for practical applications
Software Developer Intern at KPMG AG Wirtschaftsprüfungsgesellschaft
January 2020 - December 2021
Company Overview
KPMG is a global network of professional firms providing Audit, Tax, and Advisory services. With over 227,000 employees across 146 countries, KPMG is one of the Big Four accounting organizations. In Germany, KPMG is a leading firm with more than 12,500 employees across 26 locations.
Role: Software Developer
Worked in the Financial Services, Tax Asset Management department at the Frankfurt am Main office. This department specializes in tax advisory services for the financial services sector, including banks, insurance companies, asset management firms, and real estate companies.
Key Responsibilities
- Development of new software applications using C#
- Enhancement and maintenance of existing C# software solutions
- Implementation of database connectivity using Entity Framework
- Collaboration with cross-functional teams to understand and implement business requirements
- Participation in the full software development lifecycle, from requirement analysis to deployment
Technical Skills Utilized
- C# programming language
- Microsoft .NET Framework
- Entity Framework for ORM (Object-Relational Mapping)
- SQL Server database management
- Visual Studio IDE
- Version control systems (e.g., Git)
- Agile development methodologies
Key Projects
- Tax Calculation Engine Enhancement: Contributed to improving the performance and accuracy of a C#-based tax calculation engine used for financial instrument analysis.
- Database Optimization: Implemented efficient database queries and optimized Entity Framework usage, resulting in a 30% improvement in data retrieval times.
- Reporting Tool Development: Assisted in creating a new reporting tool for asset management clients, integrating various data sources and providing customizable report generation capabilities.
Key Achievements
- Demonstrated strong analytical and problem-solving skills in developing software solutions for complex financial scenarios.
- Successfully integrated into the team environment, collaborating effectively with both technical and non-technical colleagues.
- Received positive feedback for the quality and reliability of work delivered, consistently meeting project deadlines.
- Showed initiative in learning about the financial services sector and its specific technological needs.
Professional Skills Demonstrated
- Strong programming skills with a focus on C# and .NET technologies
- Ability to quickly adapt to new technologies and business domains
- Excellent problem-solving and analytical thinking capabilities
- Effective communication skills in a professional, multinational environment
- Attention to detail and commitment to producing high-quality work
- Ability to work independently and as part of a team
- Time management and ability to meet deadlines in a fast-paced environment
Impact and Learning
This internship provided valuable exposure to the intersection of technology and financial services. It enhanced my understanding of how software solutions can address complex business needs in the financial sector. The experience at KPMG has significantly contributed to my professional growth, improving both my technical skills and my ability to work in a corporate environment.
KrakenBot: Advanced Cryptocurrency Trading System with Real-Time Market Analysis
April 2021 - Present
Project Overview
Developed a comprehensive, production-ready cryptocurrency trading bot that interfaces directly with the Kraken exchange API. This system combines real-time market data analysis, advanced trading strategies, and machine learning predictions to execute automated trades and provide insightful market analytics.
Key Technologies and Libraries
- Python 3.9+
- FastAPI for high-performance API development
- SQLAlchemy for database ORM
- PostgreSQL for robust data storage
- Pydantic for data validation and settings management
- Alembic for database migrations
- TA-Lib for technical analysis indicators
- TensorFlow 2.x for machine learning models
- Docker and Docker Compose for containerization
- GitHub Actions for CI/CD
System Architecture
- Modular Design: Separated concerns into distinct components (API, database, trading logic, ML predictions)
- Real-time Data Processing: Implemented websocket connections for live market data streaming (a minimal subscription sketch follows this list)
- Scalable Database Schema: Designed efficient models for storing order book data, trades, and market trends
- RESTful API: Created endpoints for bot control, data retrieval, and strategy management
- Containerized Deployment: Utilized Docker for consistent development and easy deployment
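The subscription sketch referenced above, against Kraken's public websocket feed; the production system layers parsing, persistence, and reconnection logic on top of this:

```python
# Subscribe to Kraken's public ticker feed and print last-trade prices.
import asyncio
import json
import websockets

async def stream_ticker(pair="XBT/USD"):
    async with websockets.connect("wss://ws.kraken.com") as ws:
        await ws.send(json.dumps({
            "event": "subscribe",
            "pair": [pair],
            "subscription": {"name": "ticker"},
        }))
        async for raw in ws:
            msg = json.loads(raw)
            if isinstance(msg, list):      # data frames arrive as arrays
                print(msg[1].get("c"))     # "c" = last trade [price, lot volume]

asyncio.run(stream_ticker())
```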
Advanced Trading Strategies
- Market Making: Implemented a sophisticated market making strategy with dynamic spread adjustments
- Technical Analysis: Incorporated various TA indicators (e.g., RSI, MACD, Bollinger Bands) for trend identification
- Order Book Analysis: Developed algorithms to analyze order book depth and liquidity
- Machine Learning Integration: Used LSTM networks for short-term price movement predictions
- Risk Management: Implemented position sizing and stop-loss mechanisms to control risk exposure
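To make the risk-management rule concrete, here is a fixed-fractional position-sizing sketch; the 1% risk budget and the example prices are illustrative parameters, not the live settings:

```python
# Fixed-fractional position sizing: a stop-out loses at most risk_frac of equity.
def position_size(equity: float, entry: float, stop: float, risk_frac: float = 0.01) -> float:
    risk_per_unit = abs(entry - stop)
    if risk_per_unit == 0:
        raise ValueError("stop must differ from entry")
    return (equity * risk_frac) / risk_per_unit

# Risking 1% of 50,000 USD with entry 40,000 and stop 38,800:
print(round(position_size(50_000, 40_000, 38_800), 6), "BTC")  # -> 0.416667 BTC
```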
Key Features and Capabilities
- Multi-Asset Support: Capable of trading multiple cryptocurrency pairs simultaneously
- Real-time Performance Monitoring: Dashboard for live tracking of bot performance and market conditions
- Backtesting Engine: Allows for strategy testing on historical data before live deployment
- Automated Trade Execution: Executes trades based on predefined strategies and market conditions
- Dynamic Strategy Adjustment: Adapts strategies based on changing market volatility and trends
- Detailed Logging and Reporting: Comprehensive logs for auditing and performance analysis
Machine Learning Model
- Architecture: LSTM neural network for time series prediction
- Features:
  - Historical price data (OHLCV)
  - Technical indicators (RSI, MACD, Bollinger Bands)
  - Order book imbalance
  - Volume profile
- Training Process: Continuous retraining on recent market data to adapt to changing conditions
- Integration: Predictions used as additional input for trading decision logic
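A compressed Keras sketch of this model: windows of OHLCV-plus-indicator features in, next-step direction out. Shapes and layer sizes are representative, not the tuned production values:

```python
# Stacked LSTM over feature windows, predicting probability of an upward move.
import tensorflow as tf
from tensorflow.keras import layers

WINDOW, N_FEATURES = 60, 9  # e.g. OHLCV + RSI, MACD, Bollinger width, book imbalance

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, N_FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```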
Notable Achievements and Results
- Successfully deployed and operated on the Kraken exchange, handling real-time trading of major cryptocurrency pairs
- Achieved consistent profitability over a 3-month testing period, outperforming buy-and-hold strategy by 12%
- Developed a robust system capable of handling high-frequency trading with low latency (avg. execution time < 100ms)
- Implemented advanced risk management, resulting in a maximum drawdown of only 5% during volatile market conditions
- Created a flexible framework allowing easy integration of new trading strategies and indicators
Challenges Overcome
- High-Frequency Data Handling: Optimized database schema and implemented efficient data processing pipelines to handle large volumes of real-time market data
- API Rate Limiting: Developed an intelligent request-queuing and rate-limiting system to comply with exchange API restrictions (a token-bucket sketch follows this list)
- Market Volatility: Implemented adaptive algorithms that adjust trading parameters based on detected market regimes
- System Reliability: Designed comprehensive error handling and automatic recovery mechanisms to ensure 24/7 operation
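The token-bucket sketch referenced above; Kraken's actual limits vary by endpoint and verification tier, so the rate and burst shown are placeholders:

```python
# Token-bucket limiter: requests wait until a token is available.
import asyncio
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    async def acquire(self) -> None:
        while True:
            now = time.monotonic()
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            await asyncio.sleep((1 - self.tokens) / self.rate)

async def main():
    bucket = TokenBucket(rate=1.0, capacity=3)  # ~1 request/second, burst of 3
    for i in range(5):
        await bucket.acquire()
        print(f"request {i} sent at {time.monotonic():.2f}")

asyncio.run(main())
```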
Future Enhancements
- Integration of natural language processing for sentiment analysis of news and social media
- Implementation of reinforcement learning for dynamic strategy optimization
- Expansion to support multiple cryptocurrency exchanges for cross-exchange arbitrage
- Development of a web-based user interface for easier bot configuration and monitoring
Advanced Skills Demonstrated
- Design and implementation of production-grade, high-frequency trading systems
- Proficiency in financial markets analysis and algorithmic trading strategies
- Advanced Python development with focus on performance optimization
- Experience with real-time data processing and API integration
- Containerization and microservices architecture design
- Machine learning model development and integration in financial applications
- Robust database design and optimization for high-volume data
Data Analyst at Omran Atlas Iranian Co.
June 2011 - May 2016 (5 years)
Company Overview
Omran Atlas Iranian Co. is a civil engineering firm based in Tehran, specializing in large-scale projects such as residential complexes and road construction. The company is known for its focus on infrastructure development and efficient project execution within the construction sector.
Role: Data Analyst
I joined Omran Atlas Iranian Co. as a data analyst, supporting civil engineering projects by leveraging data-driven insights. My work focused on developing predictive models, optimizing project resources, and implementing solutions to improve the overall efficiency and safety of engineering operations.
Career Progression
- Junior Data Analyst (2011-2012): Assisted with data collection and basic report generation for ongoing projects.
- Data Analyst (2012-2014): Developed and maintained project databases, began building predictive models for project planning.
- Senior Data Analyst (2014-2016): Led data-driven initiatives, introduced real-time dashboards, and conducted risk assessments to support project management decisions.
Key Responsibilities
- Developed predictive models using Python and SQL to enhance project timeline accuracy and budget forecasting
- Conducted risk assessments and optimized resource allocation to minimize on-site incidents and reduce costs
- Implemented real-time dashboards for project monitoring using Python
- Collaborated with engineering teams to translate project requirements into actionable data insights
- Prepared analytical reports for management and project stakeholders
- Maintained and updated project databases to ensure data integrity
- Supported the adoption of data-driven decision-making across the organization
Technical Skills Developed
- Python for data analysis and dashboard development
- SQL for database management and querying
- Data visualization with libraries such as Matplotlib and Seaborn
- Statistical modeling and predictive analytics
- Risk assessment methodologies
- Project management support tools
- Microsoft Excel for advanced data manipulation
Key Projects
- Predictive Modeling for Project Timelines: Developed models that improved the accuracy of project completion estimates, helping to reduce delays by 20% (an illustrative sketch follows this list).
- Resource Optimization Initiative: Conducted analyses that led to a 10% reduction in costs and fewer on-site incidents through better resource allocation.
- Real-Time Dashboard Implementation: Built interactive dashboards in Python to provide live project status updates, enhancing decision-making for project managers.
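The illustrative sketch referenced above: a regression from project attributes to duration. The features and data are invented for demonstration; the original models and data were proprietary:

```python
# Toy timeline-duration regression in the spirit of the original models.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: budget (M), crew size, floor area (1000 m^2), rainy days
X = rng.uniform([1, 10, 1, 10], [50, 200, 40, 120], size=(300, 4))
y = 30 + 2.5 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 5, 300)  # duration in weeks

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.1f} weeks")
```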
Key Achievements
- Introduced predictive analytics to project planning, resulting in more reliable timelines and budgets.
- Reduced project delays by 20% through data-driven scheduling improvements.
- Cut project costs by 10% by optimizing resource allocation and reducing incidents.
- Developed real-time dashboards that improved project transparency and responsiveness.
Professional Skills Developed
- Proficiency in Python and SQL for data analysis
- Ability to design and implement predictive models for real-world engineering projects
- Experience with data visualization and dashboard creation
- Understanding of project management and risk assessment in civil engineering
- Effective communication of technical findings to non-technical stakeholders
- Collaboration with multidisciplinary teams
- Time management and multitasking across several ongoing projects
Impact and Growth
My tenure at Omran Atlas Iranian Co. provided me with a solid foundation in data analytics within the civil engineering sector. By integrating predictive modeling and real-time data solutions into project management, I helped drive significant improvements in efficiency, safety, and cost control. This role was instrumental in developing my technical and analytical abilities, as well as my capacity to contribute meaningfully to large-scale engineering projects.