AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1. Why is monitoring critical after model deployment?
   a) To reduce hardware costs
   b) To speed up CI builds only
   c) To reduce developer workload
   d) To detect performance degradation and drift

2. What is canary deployment in MLOps?
   a) Deploying multiple models in parallel permanently
   b) Gradually rolling out a model to a subset of users before full release
   c) Deploying models without validation
   d) Deploying models only in staging

3. What is a key advantage of using AIOps in incident management?
   a) Manual intervention for faster resolutions
   b) Proactive anomaly detection and root cause analysis
   c) Increased number of false alerts
   d) Replacing monitoring tools entirely

4. Which tool is commonly used for workflow orchestration in ML pipelines?
   a) Apache Airflow
   b) Excel
   c) Jenkins only
   d) Nagios

5. Which cloud service provides a fully managed ML pipeline solution?
   a) AWS SageMaker Pipelines
   b) VMware vSphere
   c) Kubernetes without ML
   d) Photoshop Cloud

6. What is blue-green deployment in ML pipelines?
   a) Running models on GPUs only
   b) Splitting training datasets randomly
   c) Using two ML algorithms simultaneously
   d) Maintaining two identical environments (blue and green) to switch traffic safely during updates

7. What is the purpose of MLflow in MLOps?
   a) Database sharding
   b) Container orchestration
   c) Log analysis
   d) Experiment tracking, model registry, and deployment

8. Which CI/CD tool is widely integrated with MLOps pipelines?
   a) Photoshop
   b) Final Cut Pro
   c) Jenkins
   d) MS Word

9. What does CI/CD integration with a model registry achieve?
   a) Automates promotion of validated models to production
   b) Tracks GitHub issues only
   c) Improves IDE performance
   d) Simplifies HTML rendering

10. Which stage in MLOps involves hyperparameter tuning?
   a) Model training & optimization
   b) Monitoring
   c) Incident management
   d) Deployment

11. Which of the following is an example of CI/CD for ML models?
   a) Running experiments locally only
   b) Automating retraining, testing, and deployment of models
   c) Skipping version control
   d) Manual model validation

12. Which is a key output of anomaly detection in AIOps?
   a) Optimized hyperparameters
   b) Identified unusual events that may indicate system issues
   c) Application code coverage
   d) CI/CD deployment reports

13. What is the main role of Docker in MLOps pipelines?
   a) To containerize ML models for consistent deployment
   b) To analyze log anomalies
   c) To act as a monitoring dashboard
   d) To perform hyperparameter tuning

14. What is 'model rollback' in CI/CD pipelines?
   a) Restarting the server
   b) Re-training from scratch
   c) Resetting hyperparameters
   d) Reverting to a previous stable model when the new one fails

15. What is shadow deployment in MLOps?
   a) Deploying on shadow servers only
   b) Deploying only half the model
   c) Deploying without monitoring
   d) Running a new model in parallel with the current one without serving its predictions to users

16. What role does Natural Language Processing (NLP) play in AIOps?
   a) Training computer vision models
   b) Creating CI/CD pipelines
   c) Provisioning infrastructure
   d) Parsing log files and correlating incidents

17. Which of the following is NOT a stage in the MLOps lifecycle?
   a) Model deployment
   b) Model monitoring
   c) Model destruction
   d) Model training

18. In MLOps, what is 'model lineage'?
   a) Tracking the datasets, code, and parameters that produced a model
   b) Monitoring server uptime
   c) Measuring network latency
   d) Versioning HTML files

19. Which challenge does AIOps primarily address?
   a) Manual analysis of large-scale operational data
   b) Inability to run unit tests
   c) Lack of cloud cost optimization
   d) Limited access to GitHub repositories

20. Which type of data is MOST commonly analyzed by AIOps platforms?
   a) Structured business data
   b) Customer satisfaction surveys
   c) Unstructured IT operations data such as logs, metrics, and traces
   d) Video and image datasets

21. Which tool is widely used for managing ML pipelines?
   a) Nagios
   b) Jenkins
   c) Terraform
   d) Kubeflow

22. Which of the following best describes the goal of AIOps?
   a) Automating CI/CD pipelines without monitoring
   b) Applying AI/ML techniques to IT operations for proactive issue detection
   c) Replacing DevOps entirely
   d) Automating infrastructure scaling only

23. Which of the following describes Continuous Training (CT) in MLOps?
   a) Re-training models regularly with new data
   b) Deploying models continuously without validation
   c) Running unit tests for ML code
   d) Scaling infrastructure on demand

24. What is the purpose of data drift detection?
   a) To optimize CI/CD runtime
   b) To identify changes in input data distribution that affect model performance
   c) To detect server failures
   d) To version-control datasets

25. What is the role of Kubernetes in MLOps pipelines?
   a) Data preprocessing
   b) Scaling and orchestrating ML workloads in production
   c) Hyperparameter tuning only
   d) Model evaluation

26. Which of the following is an example of predictive analytics in AIOps?
   a) Real-time log streaming
   b) Static capacity planning
   c) Manual root cause analysis
   d) Forecasting disk failures before they occur

27. Which of the following tools integrates monitoring into MLOps pipelines?
   a) PowerPoint
   b) Prometheus & Grafana
   c) Tableau only
   d) Slack

28. Which of the following ensures fairness and bias detection in ML models?
   a) Relying on accuracy only
   b) Using random data
   c) Responsible AI practices and monitoring
   d) Skipping validation

29. What is online learning in ML deployment?
   a) Deploying only during office hours
   b) Updating the model incrementally with streaming data
   c) Batch scoring only
   d) Offline retraining every month

30. Which metric is best for evaluating classification models on imbalanced datasets?
   a) CPU usage
   b) Accuracy only
   c) Mean Squared Error
   d) Precision-Recall AUC

31. What is the role of continuous validation in MLOps?
   a) Tracks Git commits
   b) Ensures deployed models remain accurate and reliable on new data
   c) Reduces network traffic
   d) Improves GPU performance

32. What is the role of GitOps in MLOps?
   a) Managing ML infrastructure and deployments declaratively through Git
   b) Running hyperparameter optimization
   c) Training ML models
   d) Visualizing anomalies

33. Which algorithm is often used in AIOps for log anomaly detection?
   a) Static regex matching
   b) Naive Bayes only
   c) LSTM (Long Short-Term Memory) networks
   d) Decision trees for UI

34. Which of the following best describes model governance?
   a) Visualization dashboards
   b) Anomaly detection only
   c) Processes ensuring compliance, auditability, and security of ML models
   d) Hyperparameter optimization

35. What is the main purpose of MLOps?
   a) To build web applications
   b) To replace software engineering practices
   c) To integrate ML models into production through CI/CD pipelines
   d) To automate cloud billing processes

36. Which AI technique is commonly used in AIOps for anomaly detection?
   a) Linear regression only
   b) Clustering algorithms
   c) Rule-based filtering
   d) Manual log parsing

37. Which monitoring metric is MOST relevant in MLOps?
   a) CPU utilization only
   b) Number of Git commits
   c) Model accuracy and drift detection
   d) Website traffic

38. What is the purpose of a model registry in MLOps?
   a) To manage Kubernetes clusters
   b) To store cloud infrastructure templates
   c) To track CI/CD pipeline executions
   d) To store, version, and manage trained ML models

39. How does AIOps reduce 'alert fatigue'?
   a) By correlating events and suppressing noise
   b) By disabling monitoring tools
   c) By automating deployments only
   d) By generating more alerts

40. What does a feature store provide in MLOps?
   a) A code versioning platform
   b) A CI/CD orchestrator
   c) A centralized repository for storing and sharing ML features
   d) A monitoring dashboard

41. Which of the following is a common model deployment pattern?
   a) Static scaling
   b) Git rebase deployment
   c) Round-robin compilation
   d) Blue-green deployment

42. What is the difference between DevOps and MLOps?
   a) MLOps replaces DevOps entirely
   b) DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
   c) MLOps is only about data collection
   d) DevOps is only for cloud computing

43. Why is explainability important in production ML models?
   a) To understand model decisions and build trust with stakeholders
   b) To reduce deployment frequency
   c) To increase data size
   d) To reduce CI/CD runtime

44. In a CI/CD pipeline, unit tests for ML models typically validate:
   a) Operating system drivers
   b) User interface design
   c) Network bandwidth
   d) Data preprocessing and feature transformations

45. In MLOps, what is 'model drift'?
   a) When hyperparameters remain constant
   b) When model performance degrades due to changes in data patterns
   c) When models crash during deployment
   d) When the model is moved between servers

46. Which of the following tools is commonly associated with AIOps?
   a) Terraform
   b) Moogsoft
   c) Kubernetes
   d) Apache Spark

47. What is a common challenge in automating ML pipelines?
   a) Cloud billing alerts
   b) Automating UI testing
   c) Writing HTML code
   d) Data versioning and reproducibility

48. Which orchestrator is commonly used for ML pipelines on Kubernetes?
   a) Nagios
   b) Airflow only
   c) Kubeflow Pipelines
   d) Splunk

49. Which of the following ensures reproducibility in ML experiments?
   a) Avoiding CI/CD
   b) Manual hyperparameter tuning only
   c) Versioning code, data, and models
   d) Skipping documentation
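The canary deployment asked about in question 2 (gradually routing a subset of users to a new model) can be sketched with a deterministic traffic splitter. This is an illustrative pure-Python sketch, not any platform's API; the function name, the MD5-based bucketing, and the 10% fraction are assumptions chosen for the example:

```python
import hashlib

def canary_route(user_id: str, canary_fraction: float = 0.1) -> str:
    """Route a user to the 'canary' or 'stable' model deterministically.

    Hashing the user ID gives a stable assignment, so the same user
    always sees the same model version during the rollout.
    """
    # Map the user ID to a float in [0, 1] via the first 8 hex digits
    # of its MD5 digest.
    digest = hashlib.md5(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "canary" if bucket < canary_fraction else "stable"

# Roughly 10% of a user population lands on the canary model.
routes = [canary_route(f"user-{i}") for i in range(1000)]
share = routes.count("canary") / len(routes)
```

Because the split is a pure function of the user ID, widening the rollout is just raising `canary_fraction`; users already on the canary stay on it.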
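Questions 24 and 45 turn on detecting data drift, i.e. a shift in the input distribution. One common score for this is the Population Stability Index (PSI); the sketch below is a minimal pure-Python version under assumed choices (10 equal-width bins from the training sample, a count floor to keep logs finite), not a production implementation:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Bin edges come from the expected (training) sample; a PSI above
    roughly 0.2 is a common rule of thumb for meaningful drift.
    """
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            # Count how many edges x exceeds -> its bin index.
            counts[sum(x > e for e in edges)] += 1
        # Floor each count at 1 so the log term stays finite.
        return [max(c, 1) / len(sample) for c in counts]

    e_frac, a_frac = fractions(expected), fractions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]
live_ok = [random.gauss(0, 1) for _ in range(5000)]       # same distribution
live_shifted = [random.gauss(1, 1) for _ in range(5000)]  # mean has drifted
```

Here `psi(train, live_ok)` stays near zero while `psi(train, live_shifted)` exceeds the 0.2 alarm threshold, which is exactly the signal a monitoring job would page on.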
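Questions 14 and 38 pair naturally: a model registry is what makes rollback a one-step operation rather than a retrain. The toy class below is illustrative only; real registries (MLflow's, for instance) persist versions and stage transitions, but the control flow is similar. All names here are invented for the example:

```python
class ModelRegistry:
    """Minimal in-memory registry with promote and rollback."""

    def __init__(self):
        self._versions = []      # append-only history of (name, version)
        self._production = None  # index of the version serving traffic
        self._previous = None    # index to fall back to on rollback

    def register(self, name, version):
        """Record a trained model and return its registry index."""
        self._versions.append((name, version))
        return len(self._versions) - 1

    def promote(self, idx):
        """Point production traffic at a registered version."""
        self._previous = self._production
        self._production = idx

    def rollback(self):
        """Revert to the previously promoted version after a bad release."""
        self._production, self._previous = self._previous, self._production

    @property
    def production(self):
        if self._production is None:
            return None
        return self._versions[self._production]

registry = ModelRegistry()
v1 = registry.register("churn-model", "1.0")
v2 = registry.register("churn-model", "2.0")
registry.promote(v1)  # 1.0 goes live
registry.promote(v2)  # 2.0 goes live
registry.rollback()   # 2.0 misbehaves: production reverts to 1.0
```

The key property is that every version stays in the registry, so "revert to a previous stable model" is a pointer change, not a rebuild.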