HotSkills.Tech - Explore Job Opportunities

Find the latest job openings from top companies around the world. Use the filters to refine your search and discover the perfect role for you.

Featured Jobs

Senior Python Developer

TechCorp

San Francisco, CA

Data Scientist

DataCo

New York, NY

Frontend Developer

WebSolutions

Remote

DevOps Engineer

CloudTech

Seattle, WA

Open Jobs

Data Scientist (Machine Learning & Pipeline Engineering)

Kalamata Capital, LLC. - Boston, MA

Principal ServiceNow Developer

Citizens - Johnston, RI

Principal Software Engineer, Site Reliability Engineering

General Motors - Lincoln, NE

Software Engineer - Generalist

CVS Health - Richmond, VA

Senior Lead Software Engineer, Back End

Capital One - Richmond, VA

Staff Software Engineer

Cisco - Richfield, OH

Senior Software Engineer: Java, AWS, Agile

Jobs via Dice - Cleveland, OH

Data Scientist (Machine Learning & Pipeline Engineering)

Kalamata Capital, LLC. - Boston, MA

Job Type: Not specified

Experience: Not specified

About the Job

About Us

Kalamata Capital Group is a forward-thinking financial technology company committed to leveraging data-driven intelligence to support small business growth. We are seeking a highly skilled Data Scientist to develop predictive models, perform robust exploratory data analysis, and build scalable data pipelines that power key business decisions across the organization.

Summary

The ideal candidate is an experienced data scientist with deep technical expertise in machine learning, data engineering workflows, and statistical modeling. This role will work closely with engineering, product, and analytics teams to design, validate, and deploy ML solutions that improve decision-making efficiency. Strong proficiency in Pandas, PySpark, and MongoDB is essential, along with the ability to write clean, reproducible, production-ready code. The successful candidate will be equally comfortable communicating complex analytical insights to non-technical stakeholders.

Key Responsibilities

  • Exploratory Analysis & Data Profiling: Conduct EDA on large, complex datasets using Pandas and PySpark; assess data quality and structure.
  • Model Development: Build, tune, and evaluate supervised and unsupervised machine learning models (e.g., tree-based methods, regressions, boosting algorithms).
  • Pipeline Engineering: Design and implement reliable, maintainable machine learning pipelines and preprocessing workflows for production environments.
  • Data Management: Query and integrate MongoDB datasets; design efficient schemas and aggregation pipelines that support analytical and operational workloads.
  • Visualization: Create intuitive visualizations using seaborn, plotly, and matplotlib to support model diagnostics and business storytelling.
  • Reproducible Code: Write clean, modular, well-documented Python code (PEP8 compliant); maintain version control using Git.
  • Model Explainability: Apply model interpretation tools such as SHAP and LIME to evaluate feature impact and improve transparency.
  • Cross-Functional Collaboration: Partner with engineering, analytics, and product teams to translate business needs into actionable model-driven solutions.
  • Documentation: Produce clear technical memos, reports, and model documentation for internal stakeholders.

Required Skills & Qualifications

Education & Experience:
  • M.S. in Computer Science, Machine Learning, Computational Biology, or a related quantitative field plus 3+ years of relevant experience, or an equivalent combination of education and applied work.
  • Strong foundation in linear algebra, probability, and statistics.

Technical Expertise:
  • Advanced proficiency with Pandas and PySpark for data cleaning, reshaping, merges, feature engineering, and workflow optimization.
  • Strong experience with MongoDB, including querying, indexing, and aggregation pipelines.
  • Deep knowledge of supervised/unsupervised ML techniques and tools (scikit-learn, XGBoost).
  • Solid understanding of optimization, regularization, loss functions, and evaluation metrics (AUC, precision, recall, RMSE).

Core Skills:
  • Experience delivering end-to-end ML projects (data ingestion, modeling, evaluation, and optional deployment).
  • Ability to write clean, reproducible code and maintain organized notebooks/scripts.
  • Excellent communication skills, with the ability to translate analysis into business insights.
  • Ability to relocate to the New York metro area.

Preferred (Bonus) Skills
  • Experience with AWS tools (Glue, S3, DMS).
  • Familiarity with deep learning frameworks (PyTorch, TensorFlow).
  • Experience deploying models using FastAPI, Flask, AWS, or GCP.
  • SQL, data warehousing, or data versioning experience.
  • Software engineering best practices (testing, CI/CD, code review).
  • Link to GitHub, GitLab, or portfolio of analytical/ML code.

Flexible work-from-home options are available.

Posted: 1 day ago
Schedule Type: Full-time
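For orientation, here is a minimal, hypothetical sketch of the kind of end-to-end workflow the posting above describes: a reproducible scikit-learn pipeline around a boosted tree model, evaluated with precision, recall, and AUC. The synthetic dataset, column names, and model settings are invented for illustration and are not part of the listing.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic tabular data standing in for a real feature set (illustrative only).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(8)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Preprocessing and a boosted tree model wrapped in one reproducible pipeline.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("model", GradientBoostingClassifier(random_state=0)),
])
pipeline.fit(X_train, y_train)

# Evaluate with the metrics named in the posting (precision, recall, AUC).
pred = pipeline.predict(X_test)
proba = pipeline.predict_proba(X_test)[:, 1]
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
print("AUC:      ", roc_auc_score(y_test, proba))
```

A project along the lines the posting describes would replace the synthetic data with features sourced from MongoDB or PySpark and add interpretation steps such as SHAP on the fitted model.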

Enhance Your Job Prospects

  • Access exclusive job listings in specialized industries.
  • Optimize your CV with our AI-powered tools.
  • Define salary and equity expectations from the start.
  • Find the perfect fit with personalized job filters.
  • Have top founders and recruiters reach out to you.

Showing 484 to 490 of 2430 results