AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1. Which of the following is an example of CI/CD for ML models?
   - Automating retraining, testing, and deployment of models
   - Manual model validation
   - Skipping version control
   - Running experiments locally only

2. Which is a key output of anomaly detection in AIOps?
   - Identified unusual events that may indicate system issues
   - Optimized hyperparameters
   - CI/CD deployment reports
   - Application code coverage

3. What is the main role of Docker in MLOps pipelines?
   - To analyze log anomalies
   - To act as a monitoring dashboard
   - To containerize ML models for consistent deployment
   - To perform hyperparameter tuning

4. What is shadow deployment in MLOps?
   - Deploying only half the model
   - Deploying on shadow servers only
   - Deploying without monitoring
   - Running a new model in parallel with the current one without serving predictions to users

5. Which of the following best describes model governance?
   - Processes ensuring compliance, auditability, and security in ML models
   - Hyperparameter optimization
   - Anomaly detection only
   - Visualization dashboards

6. What is canary deployment in MLOps?
   - Deploying models only in staging
   - Deploying models without validation
   - Deploying multiple models in parallel permanently
   - Gradually rolling out a model to a subset of users before full release

7. Which orchestrator is commonly used for ML pipelines in Kubernetes?
   - Kubeflow Pipelines
   - Airflow only
   - Splunk
   - Nagios

8. Which of the following describes Continuous Training (CT) in MLOps?
   - Running unit tests for ML code
   - Re-training models regularly with new data
   - Deploying models continuously without validation
   - Scaling infrastructure on demand

9. Which cloud service provides a fully managed ML pipeline solution?
   - Kubernetes without ML
   - VMware vSphere
   - AWS SageMaker Pipelines
   - Photoshop Cloud

10. Which of the following tools is commonly associated with AIOps?
   - Kubernetes
   - Moogsoft
   - Apache Spark
   - Terraform

11. What is online learning in ML deployment?
   - Batch scoring only
   - Updating the model incrementally with streaming data
   - Deploying only during office hours
   - Offline retraining every month

12. What is the difference between DevOps and MLOps?
   - DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
   - DevOps is only for cloud computing
   - MLOps replaces DevOps entirely
   - MLOps is only about data collection

13. Which CI/CD tool is widely integrated with MLOps pipelines?
   - MS Word
   - Jenkins
   - Final Cut Pro
   - Photoshop

14. What role does Natural Language Processing (NLP) play in AIOps?
   - Training computer vision models
   - Parsing log files and correlating incidents
   - Provisioning infrastructure
   - Creating CI/CD pipelines

15. What does a feature store provide in MLOps?
   - A monitoring dashboard
   - A code versioning platform
   - A centralized repository for storing and sharing ML features
   - A CI/CD orchestrator

16. Which of the following tools integrates monitoring into MLOps pipelines?
   - Slack
   - Prometheus & Grafana
   - Tableau only
   - PowerPoint

17. What is the purpose of a model registry in MLOps?
   - To track CI/CD pipeline executions
   - To store, version, and manage trained ML models
   - To store cloud infrastructure templates
   - To manage Kubernetes clusters

18. What is the role of GitOps in MLOps?
   - Visualizing anomalies
   - Running hyperparameter optimization
   - Managing ML infrastructure and deployments declaratively through Git
   - Training ML models

19. What is the role of Kubernetes in MLOps pipelines?
   - Hyperparameter tuning only
   - Model evaluation
   - Scaling and orchestrating ML workloads in production
   - Data preprocessing

20. Which of the following is NOT a stage in the MLOps lifecycle?
   - Model destruction
   - Model monitoring
   - Model training
   - Model deployment

21. What is the role of continuous validation in MLOps?
   - Reduces network traffic
   - Improves GPU performance
   - Ensures deployed models remain accurate and reliable with new data
   - Tracks Git commits

22. Which metric is best for evaluating classification models on imbalanced datasets?
   - CPU usage
   - Precision-Recall AUC
   - Mean Squared Error
   - Accuracy only

23. Which of the following is an example of predictive analytics in AIOps?
   - Forecasting disk failures before they occur
   - Static capacity planning
   - Real-time log streaming
   - Manual root cause analysis

24. What is a key advantage of using AIOps in incident management?
   - Increased number of false alerts
   - Proactive anomaly detection and root cause analysis
   - Replacing monitoring tools entirely
   - Manual intervention for faster resolutions

25. Which of the following ensures reproducibility in ML experiments?
   - Versioning code, data, and models
   - Manual hyperparameter tuning only
   - Skipping documentation
   - Avoiding CI/CD

26. Which algorithm is often used in AIOps for log anomaly detection?
   - LSTM (Long Short-Term Memory) networks
   - Decision Trees for UI
   - Naive Bayes only
   - Static regex matching

27. Which tool is widely used for managing ML pipelines?
   - Terraform
   - Jenkins
   - Kubeflow
   - Nagios

28. Which of the following is a common model deployment pattern?
   - Round-Robin Compilation
   - Blue-Green Deployment
   - Static Scaling
   - Git Rebase Deployment

29. What is 'model rollback' in CI/CD pipelines?
   - Resetting hyperparameters
   - Reverting to a previous stable model when the new one fails
   - Restarting the server
   - Re-training from scratch

30. Which tool is commonly used for workflow orchestration in ML pipelines?
   - Jenkins only
   - Nagios
   - Excel
   - Apache Airflow

31. What is the purpose of data drift detection?
   - To optimize CI/CD runtime
   - To version-control datasets
   - To detect server failures
   - To identify changes in input data distribution affecting model performance

32. What is a common challenge in automating ML pipelines?
   - Writing HTML code
   - Cloud billing alerts
   - Automating UI testing
   - Data versioning and reproducibility

33. Which type of data is MOST commonly analyzed by AIOps platforms?
   - Customer satisfaction surveys
   - Unstructured IT operations data like logs, metrics, and traces
   - Structured business data
   - Video and image datasets

34. Which of the following ensures fairness and bias detection in ML models?
   - Using random data
   - Skipping validation
   - Relying on accuracy only
   - Responsible AI practices and monitoring

35. Which monitoring metric is MOST relevant in MLOps?
   - Model accuracy and drift detection
   - Number of Git commits
   - CPU utilization only
   - Website traffic

36. How does AIOps reduce 'alert fatigue'?
   - By disabling monitoring tools
   - By automating deployments only
   - By correlating events and suppressing noise
   - By generating more alerts

37. What is blue-green deployment in ML pipelines?
   - Running models on GPUs only
   - Using two ML algorithms simultaneously
   - Splitting training datasets randomly
   - Maintaining two identical environments (blue and green) to switch traffic safely during updates

38. Which of the following best describes the goal of AIOps?
   - Replacing DevOps entirely
   - Automating CI/CD pipelines without monitoring
   - Applying AI/ML techniques to IT operations for proactive issue detection
   - Automating infrastructure scaling only

39. Why is explainability important in production ML models?
   - To reduce CI/CD runtime
   - To reduce deployment frequency
   - To increase data size
   - To understand model decisions and build trust with stakeholders

40. What does CI/CD integration with a model registry achieve?
   - Automates promotion of validated models to production
   - Simplifies HTML rendering
   - Tracks GitHub issues only
   - Improves IDE performance

41. In MLOps, what is 'model lineage'?
   - Monitoring server uptime
   - Tracking datasets, code, and parameters that produced a model
   - Measuring network latency
   - Versioning HTML files

42. Why is monitoring critical after model deployment?
   - To reduce hardware costs
   - To speed up CI builds only
   - To detect performance degradation and drift
   - To reduce developer workload

43. Which stage in MLOps involves hyperparameter tuning?
   - Model training & optimization
   - Deployment
   - Monitoring
   - Incident management

44. Which AI technique is commonly used in AIOps for anomaly detection?
   - Clustering algorithms
   - Linear regression only
   - Rule-based filtering
   - Manual log parsing

45. Which challenge does AIOps primarily address?
   - Inability to run unit tests
   - Lack of cloud cost optimization
   - Limited access to GitHub repositories
   - Manual analysis of large-scale operational data

46. In MLOps, what is 'model drift'?
   - When the model is moved between servers
   - When models crash during deployment
   - When model performance degrades due to changes in data patterns
   - When hyperparameters remain constant

47. In a CI/CD pipeline, unit tests for ML models typically validate:
   - Network bandwidth
   - Operating system drivers
   - User interface design
   - Data preprocessing and feature transformations

48. What is the main purpose of MLOps?
   - To automate cloud billing processes
   - To replace software engineering practices
   - To integrate ML models into production through CI/CD pipelines
   - To build web applications

49. What is the purpose of MLflow in MLOps?
   - Log analysis
   - Database sharding
   - Container orchestration
   - Experiment tracking, model registry, and deployment
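One of the questions above asks why Precision-Recall AUC beats plain accuracy on imbalanced datasets. A minimal sketch with scikit-learn illustrates the point: the data below is synthetic (not from the test), and the "always negative" classifier is a deliberately useless baseline.

```python
import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score

# Synthetic imbalanced labels: roughly 5% positives, 95% negatives.
rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.05).astype(int)

# A useless "always predict negative" classifier still scores high on accuracy...
y_pred_const = np.zeros_like(y_true)
print(accuracy_score(y_true, y_pred_const))  # ~0.95

# ...but average precision (the PR-AUC summary) collapses to the base rate,
# exposing that the model finds no positives at all.
scores_const = np.zeros(len(y_true), dtype=float)
print(average_precision_score(y_true, scores_const))  # ~0.05
```

This is why PR-AUC is the hedged answer for imbalanced classification: it tracks how well positives are ranked, while accuracy is dominated by the majority class.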
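Data drift detection, which appears in several questions, means flagging a change in the input distribution between training data and live traffic. One common lightweight approach (an illustrative choice here, not the only one) is a two-sample Kolmogorov-Smirnov test per feature; the arrays below are synthetic stand-ins for a training feature and a shifted production feature.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time distribution
live_feature = rng.normal(loc=0.5, scale=1.0, size=5000)   # production data with a mean shift

# KS test compares the two empirical distributions; a tiny p-value signals drift.
stat, p_value = ks_2samp(train_feature, live_feature)
drift_detected = p_value < 0.01
print(drift_detected)  # True for this deliberate 0.5-sigma shift
```

In a pipeline this check would run per feature on a schedule, with an alert (or a retraining trigger, per the Continuous Training question) when drift is detected.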
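The canary and blue-green questions both hinge on routing only part of the traffic to a new model. A minimal sketch of deterministic canary routing in plain Python, assuming a hypothetical `route` helper (real systems do this at the load balancer or service mesh, not in application code):

```python
import hashlib

def route(user_id: str, canary_fraction: float = 0.1) -> str:
    """Deterministically send a fixed fraction of users to the canary model."""
    # Hash the user id into a stable bucket in [0, 1); the same user always
    # lands in the same bucket, so their experience is consistent.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000
    return "canary" if bucket < canary_fraction else "stable"

assignments = [route(f"user-{i}") for i in range(10_000)]
print(assignments.count("canary") / len(assignments))  # close to 0.1
```

If the canary's monitored metrics degrade, the rollout stops and traffic reverts to the stable model, which is exactly the 'model rollback' scenario from the quiz.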