Job Information
Pendulum Intelligence, Inc. Senior Data Scientist in Seattle, Washington
Pendulum Intelligence Inc., Seattle, WA seeks to hire Senior Data Scientists to perform the following duties:

1. Translate requirements into production-ready computer vision solutions by scoping requirements and building, comparing, evaluating, and deploying state-of-the-art machine learning/AI models for tasks such as optical character recognition, hate-symbol classification, gun detection, and more.
2. Prototype, engineer, benchmark, validate, and fine-tune deep learning models by writing production-quality code in Python, training and fine-tuning in TensorFlow and PyTorch, and rigorously assessing performance using mAP, IoU, precision-recall curves, and detailed error analysis.
3. Create, curate, analyze, and clean large datasets for labeling by defining schemas and class distributions, writing comprehensive annotation instructions, training and guiding labelers, and conducting quality validation through spot checks, inter-annotator agreement analysis, and corrective feedback loops.
4. Run reproducible data-science experiments at scale with tools such as Amazon SageMaker, automate hyperparameter sweeps, and track model metrics.
5. Apply advanced image-processing techniques (OpenCV, scikit-image, PIL), including contrast enhancement, sharpening, and super-resolution, to maximize downstream model accuracy.
6. Design and develop perception and computer vision systems, including innovative methods for extracting keyframes from videos to analyze large volumes of video content efficiently.
7. Write labeling instructions and coordinate with data annotators to label curated data for training machine learning models.
8. Build and manage large, balanced datasets using SQL and Amazon S3; design annotation pipelines; and implement augmentation strategies.
9. Lead cross-functional delivery: mentor engineers, coordinate with stakeholders, and turn experimental findings into roadmap decisions for the Pendulum platform.
10. Document experiments and findings by maintaining structured experiment logs and recording methodologies, hyperparameters, result visualizations, error analyses, and key insights to ensure reproducibility and facilitate knowledge sharing; identify challenges after data-science experiments to help the engineering and product teams make decisions around prioritization, integration, and deployment of a solution.
11. Document workflows, experiments, and model performance in Confluence for future reference and improvement.

May telecommute.

Requires a Bachelor's (or foreign educ. equiv.) degree in Computer Engineering, Electrical Engineering, Machine Learning, Computer Science, or a related field, plus two (2) years' experience in the job offered or a related occupation. Experience must include:

a. Designing and developing perception and computer vision systems.
b. Creating and curating datasets for labeling by defining schemas and class distributions, writing comprehensive annotation instructions, training and guiding labelers, and conducting quality validation through spot checks, inter-annotator agreement analysis, and corrective feedback loops.
c. Engineering, benchmarking, and validating models by writing production-quality code in Python and C++, training and fine-tuning in TensorFlow and PyTorch, and rigorously assessing performance using mAP, IoU, precision-recall curves, and detailed error analysis.
d. Documenting experiments and findings by maintaining structured experiment logs and recording methodologies, hyperparameters, result visualizations, error analyses, and key insights to ensure reproducibility and facilitate knowledge sharing.
e. Leading perception development and testing efforts.
f. Building and managing large, balanced datasets using SQL and Amazon S3; designing annotation pipelines; and implementing augmentation strategies.
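For candidates unfamiliar with the detection metrics named above (duty 2 and requirement c), intersection-over-union (IoU) measures the overlap between a predicted and a ground-truth bounding box and underpins mAP scoring. A minimal sketch, not part of the posting itself; the function name and (x1, y1, x2, y2) box format are illustrative assumptions:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two 2x2 boxes offset by one unit share a 1x1 intersection over a union of 7, giving an IoU of 1/7; detection benchmarks typically count a prediction as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.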