AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1 / 49 Which stage in MLOps involves hyperparameter tuning?
- Monitoring
- Model training & optimization
- Deployment
- Incident management

2 / 49 What is the role of continuous validation in MLOps?
- Ensures deployed models remain accurate and reliable with new data
- Improves GPU performance
- Tracks Git commits
- Reduces network traffic

3 / 49 In MLOps, what is 'model drift'?
- When the model is moved between servers
- When models crash during deployment
- When hyperparameters remain constant
- When model performance degrades due to changes in data patterns

4 / 49 Which cloud service provides a fully managed ML pipeline solution?
- AWS SageMaker Pipelines
- Photoshop Cloud
- Kubernetes without ML
- VMware vSphere

5 / 49 What is blue-green deployment in ML pipelines?
- Maintaining two identical environments (blue and green) to switch traffic safely during updates
- Running models on GPUs only
- Using two ML algorithms simultaneously
- Splitting training datasets randomly

6 / 49 What is the role of GitOps in MLOps?
- Managing ML infrastructure and deployments declaratively through Git
- Running hyperparameter optimization
- Training ML models
- Visualizing anomalies

7 / 49 Which of the following tools integrates monitoring into MLOps pipelines?
- Tableau only
- Slack
- PowerPoint
- Prometheus & Grafana

8 / 49 What does a feature store provide in MLOps?
- A monitoring dashboard
- A centralized repository for storing and sharing ML features
- A CI/CD orchestrator
- A code versioning platform

9 / 49 What is a key advantage of using AIOps in incident management?
- Increased number of false alerts
- Proactive anomaly detection and root cause analysis
- Replacing monitoring tools entirely
- Manual intervention for faster resolutions

10 / 49 In a CI/CD pipeline, unit tests for ML models typically validate:
- Data preprocessing and feature transformations
- Network bandwidth
- Operating system drivers
- User interface design

11 / 49 What is the main purpose of MLOps?
- To build web applications
- To automate cloud billing processes
- To integrate ML models into production through CI/CD pipelines
- To replace software engineering practices

12 / 49 Which metric is best for evaluating classification models on an imbalanced dataset? (see the first sketch after question 19)
- Accuracy only
- Mean Squared Error
- Precision-Recall AUC
- CPU usage

13 / 49 Which tool is widely used for managing ML pipelines?
- Kubeflow
- Jenkins
- Terraform
- Nagios

14 / 49 What is the purpose of MLflow in MLOps? (see the second sketch after question 19)
- Database sharding
- Log analysis
- Container orchestration
- Experiment tracking, model registry, and deployment

15 / 49 How does AIOps reduce 'alert fatigue'?
- By generating more alerts
- By automating deployments only
- By correlating events and suppressing noise
- By disabling monitoring tools

16 / 49 Which AI technique is commonly used in AIOps for anomaly detection?
- Linear regression only
- Manual log parsing
- Rule-based filtering
- Clustering algorithms

17 / 49 Which of the following is an example of predictive analytics in AIOps?
- Real-time log streaming
- Manual root cause analysis
- Forecasting disk failures before they occur
- Static capacity planning

18 / 49 What is a common challenge in automating ML pipelines?
- Cloud billing alerts
- Data versioning and reproducibility
- Writing HTML code
- Automating UI testing

19 / 49 What is the purpose of a model registry in MLOps?
- To track CI/CD pipeline executions
- To manage Kubernetes clusters
- To store, version, and manage trained ML models
- To store cloud infrastructure templates
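A minimal sketch of the Precision-Recall AUC idea from question 12, assuming scikit-learn is available; the toy labels and scores are invented for illustration.

```python
# Precision-Recall AUC (average precision) on a small, imbalanced toy example.
from sklearn.metrics import average_precision_score, precision_recall_curve

# Toy data: only 2 positives out of 10 samples (imbalanced).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_score = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.80, 0.90]

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
pr_auc = average_precision_score(y_true, y_score)  # area under the PR curve
print(f"Precision-Recall AUC: {pr_auc:.3f}")
```

Unlike plain accuracy, this summary stays informative when the positive class is rare.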
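A minimal sketch of MLflow's experiment-tracking role from question 14, assuming mlflow and scikit-learn are installed and a local tracking store is acceptable; the run name, parameter, and metric are placeholders.

```python
# Log a training run's parameters, metric, and model artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=42)

with mlflow.start_run(run_name="screening-demo"):
    model = LogisticRegression(C=1.0, max_iter=200).fit(X, y)
    mlflow.log_param("C", 1.0)                              # hyperparameter
    mlflow.log_metric("train_accuracy", model.score(X, y))  # evaluation metric
    mlflow.sklearn.log_model(model, "model")                # model artifact
```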
20 / 49 Which monitoring metric is MOST relevant in MLOps?
- Website traffic
- Number of Git commits
- Model accuracy and drift detection
- CPU utilization only

21 / 49 Why is monitoring critical after model deployment?
- To speed up CI builds only
- To reduce developer workload
- To reduce hardware costs
- To detect performance degradation and drift

22 / 49 Which of the following is NOT a stage in the MLOps lifecycle?
- Model training
- Model deployment
- Model monitoring
- Model destruction

23 / 49 Which of the following best describes model governance?
- Processes ensuring compliance, auditability, and security in ML models
- Hyperparameter optimization
- Visualization dashboards
- Anomaly detection only

24 / 49 Which of the following best describes the goal of AIOps?
- Replacing DevOps entirely
- Automating CI/CD pipelines without monitoring
- Automating infrastructure scaling only
- Applying AI/ML techniques to IT operations for proactive issue detection

25 / 49 Why is explainability important in production ML models?
- To reduce deployment frequency
- To understand model decisions and build trust with stakeholders
- To reduce CI/CD runtime
- To increase data size

26 / 49 In MLOps, what is 'model lineage'?
- Monitoring server uptime
- Measuring network latency
- Tracking datasets, code, and parameters that produced a model
- Versioning HTML files

27 / 49 Which of the following describes Continuous Training (CT) in MLOps?
- Running unit tests for ML code
- Deploying models continuously without validation
- Re-training models regularly with new data
- Scaling infrastructure on demand

28 / 49 Which challenge does AIOps primarily address?
- Inability to run unit tests
- Manual analysis of large-scale operational data
- Limited access to GitHub repositories
- Lack of cloud cost optimization

29 / 49 Which of the following ensures fairness and bias detection in ML models?
- Skipping validation
- Using random data
- Responsible AI practices and monitoring
- Relying on accuracy only

30 / 49 What is the role of Kubernetes in MLOps pipelines?
- Scaling and orchestrating ML workloads in production
- Model evaluation
- Hyperparameter tuning only
- Data preprocessing

31 / 49 Which of the following ensures reproducibility in ML experiments?
- Manual hyperparameter tuning only
- Avoiding CI/CD
- Skipping documentation
- Versioning code, data, and models

32 / 49 What is online learning in ML deployment? (see the sketch after question 37)
- Offline retraining every month
- Batch scoring only
- Updating the model incrementally with streaming data
- Deploying only during office hours

33 / 49 Which of the following is a common model deployment pattern?
- Blue-Green Deployment
- Round-Robin Compilation
- Static Scaling
- Git Rebase Deployment

34 / 49 Which tool is commonly used for workflow orchestration in ML pipelines?
- Jenkins only
- Apache Airflow
- Excel
- Nagios

35 / 49 What is the main role of Docker in MLOps pipelines?
- To perform hyperparameter tuning
- To act as a monitoring dashboard
- To containerize ML models for consistent deployment
- To analyze log anomalies

36 / 49 What is 'model rollback' in CI/CD pipelines?
- Restarting the server
- Resetting hyperparameters
- Reverting to a previous stable model when the new one fails
- Re-training from scratch

37 / 49 Which type of data is MOST commonly analyzed by AIOps platforms?
- Structured business data
- Video and image datasets
- Unstructured IT operations data like logs, metrics, and traces
- Customer satisfaction surveys
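A minimal sketch of online (incremental) learning from question 32, assuming scikit-learn; the streaming mini-batches are simulated with random data rather than a real data stream.

```python
# Update a model incrementally with streaming mini-batches via partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])  # all classes must be declared on the first call

for step in range(5):  # each iteration stands in for a new batch from a stream
    X_batch = rng.normal(size=(32, 4))
    y_batch = (X_batch[:, 0] > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=classes)
    print(f"batch {step}: accuracy on this batch = {model.score(X_batch, y_batch):.2f}")
```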
38 / 49 What role does Natural Language Processing (NLP) play in AIOps?
- Provisioning infrastructure
- Creating CI/CD pipelines
- Training computer vision models
- Parsing log files and correlating incidents

39 / 49 Which is a key output of anomaly detection in AIOps?
- CI/CD deployment reports
- Application code coverage
- Optimized hyperparameters
- Identified unusual events that may indicate system issues

40 / 49 What is the difference between DevOps and MLOps?
- DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
- MLOps replaces DevOps entirely
- MLOps is only about data collection
- DevOps is only for cloud computing

41 / 49 Which algorithm is often used in AIOps for log anomaly detection?
- LSTM (Long Short-Term Memory) networks
- Decision Trees for UI
- Static Regex Matching
- Naive Bayes only

42 / 49 What is Canary Deployment in MLOps?
- Deploying models without validation
- Gradually rolling out a model to a subset of users before full release
- Deploying multiple models in parallel permanently
- Deploying models only in staging

43 / 49 Which CI/CD tool is widely integrated with MLOps pipelines?
- MS Word
- Final Cut Pro
- Jenkins
- Photoshop

44 / 49 What is the purpose of data drift detection? (see the first sketch after question 49)
- To identify changes in input data distribution affecting model performance
- To version-control datasets
- To detect server failures
- To optimize CI/CD runtime

45 / 49 Which of the following tools is commonly associated with AIOps?
- Terraform
- Moogsoft
- Apache Spark
- Kubernetes

46 / 49 Which of the following is an example of CI/CD for ML models?
- Running experiments locally only
- Automating retraining, testing, and deployment of models
- Skipping version control
- Manual model validation

47 / 49 What does CI/CD integration with a model registry achieve?
- Tracks GitHub issues only
- Simplifies HTML rendering
- Automates promotion of validated models to production
- Improves IDE performance

48 / 49 What is shadow deployment in MLOps? (see the second sketch after question 49)
- Deploying only half the model
- Running a new model in parallel with the current one without serving predictions to users
- Deploying on shadow servers only
- Deploying without monitoring

49 / 49 Which orchestrator is commonly used for ML pipelines in Kubernetes?
- Nagios
- Airflow only
- Splunk
- Kubeflow Pipelines
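A minimal sketch of the data drift detection idea from question 44, comparing a live feature sample against its training-time baseline with a two-sample Kolmogorov-Smirnov test; SciPy is assumed, the data is synthetic, and the 0.05 threshold is an illustrative choice.

```python
# Flag input data drift by comparing live and training distributions of a feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training baseline
live_feature = rng.normal(loc=0.6, scale=1.0, size=1000)   # shifted: simulated drift

statistic, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.05:  # illustrative significance threshold
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.4f}); consider retraining")
else:
    print("No significant drift detected")
```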
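Finally, a schematic sketch of shadow deployment from question 48: the candidate model scores the same traffic as the live model, but only the live model's output is served. The serve_prediction function and the stand-in model class are hypothetical names invented for this example.

```python
# Shadow deployment: the shadow model sees live traffic, its result is only
# logged for comparison, and the live model's prediction is what gets served.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

class ConstantModel:
    """Stand-in for a trained model (hypothetical)."""
    def __init__(self, value):
        self.value = value

    def predict(self, features):
        return self.value

def serve_prediction(features, live_model, shadow_model):
    live_pred = live_model.predict(features)
    try:
        shadow_pred = shadow_model.predict(features)
        log.info("live=%s shadow=%s agree=%s", live_pred, shadow_pred, live_pred == shadow_pred)
    except Exception:  # a shadow failure must never affect user-facing traffic
        log.exception("shadow model failed")
    return live_pred

print(serve_prediction({"x": 1.0}, ConstantModel("approve"), ConstantModel("reject")))
```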