AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1 / 49: What is Canary Deployment in MLOps?
- Gradually rolling out a model to a subset of users before full release
- Deploying multiple models in parallel permanently
- Deploying models without validation
- Deploying models only in staging

2 / 49: Which of the following ensures fairness and bias detection in ML models?
- Using random data
- Responsible AI practices and monitoring
- Relying on accuracy only
- Skipping validation

3 / 49: What is the role of GitOps in MLOps?
- Managing ML infrastructure and deployments declaratively through Git
- Running hyperparameter optimization
- Training ML models
- Visualizing anomalies

4 / 49: Which of the following tools is commonly associated with AIOps?
- Terraform
- Moogsoft
- Apache Spark
- Kubernetes

5 / 49: What is the main role of Docker in MLOps pipelines?
- To act as a monitoring dashboard
- To containerize ML models for consistent deployment
- To analyze log anomalies
- To perform hyperparameter tuning

6 / 49: Which challenge does AIOps primarily address?
- Limited access to GitHub repositories
- Inability to run unit tests
- Manual analysis of large-scale operational data
- Lack of cloud cost optimization

7 / 49: Which stage in MLOps involves hyperparameter tuning?
- Monitoring
- Deployment
- Incident management
- Model training & optimization

8 / 49: Which of the following ensures reproducibility in ML experiments?
- Manual hyperparameter tuning only
- Skipping documentation
- Avoiding CI/CD
- Versioning code, data, and models

9 / 49: What is the main purpose of MLOps?
- To automate cloud billing processes
- To build web applications
- To integrate ML models into production through CI/CD pipelines
- To replace software engineering practices

10 / 49: Which type of data is MOST commonly analyzed by AIOps platforms?
- Unstructured IT operations data like logs, metrics, and traces
- Structured business data
- Video and image datasets
- Customer satisfaction surveys

11 / 49: Which cloud service provides a fully managed ML pipeline solution?
- AWS SageMaker Pipelines
- Photoshop Cloud
- VMware vSphere
- Kubernetes without ML

12 / 49: Which of the following tools integrates monitoring into MLOps pipelines?
- PowerPoint
- Slack
- Prometheus & Grafana
- Tableau only

13 / 49: Which algorithm is often used in AIOps for log anomaly detection?
- LSTM (Long Short-Term Memory) networks
- Decision Trees for UI
- Static Regex Matching
- Naive Bayes only

14 / 49: Which tool is commonly used for workflow orchestration in ML pipelines?
- Jenkins only
- Apache Airflow
- Nagios
- Excel

15 / 49: Which of the following describes Continuous Training (CT) in MLOps?
- Re-training models regularly with new data
- Running unit tests for ML code
- Deploying models continuously without validation
- Scaling infrastructure on demand

16 / 49: What is the purpose of MLflow in MLOps?
- Container orchestration
- Experiment tracking, model registry, and deployment
- Database sharding
- Log analysis

17 / 49: Which of the following is a common model deployment pattern?
- Git Rebase Deployment
- Round-Robin Compilation
- Blue-Green Deployment
- Static Scaling

18 / 49: What is a key advantage of using AIOps in incident management?
- Replacing monitoring tools entirely
- Increased number of false alerts
- Proactive anomaly detection and root cause analysis
- Manual intervention for faster resolutions

19 / 49: Which of the following is an example of predictive analytics in AIOps?
- Static capacity planning
- Manual root cause analysis
- Forecasting disk failures before they occur
- Real-time log streaming

20 / 49: What does CI/CD integration with a model registry achieve?
- Improves IDE performance
- Simplifies HTML rendering
- Automates promotion of validated models to production
- Tracks GitHub issues only

21 / 49: What is blue-green deployment in ML pipelines?
- Splitting training datasets randomly
- Maintaining two identical environments (blue and green) to switch traffic safely during updates
- Running models in GPUs only
- Using two ML algorithms simultaneously

22 / 49: What is the difference between DevOps and MLOps?
- MLOps replaces DevOps entirely
- DevOps is only for cloud computing
- MLOps is only about data collection
- DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring

23 / 49: Which orchestrator is commonly used for ML pipelines in Kubernetes?
- Nagios
- Splunk
- Kubeflow Pipelines
- Airflow only

24 / 49: What is the role of Kubernetes in MLOps pipelines?
- Data preprocessing
- Model evaluation
- Scaling and orchestrating ML workloads in production
- Hyperparameter tuning only

25 / 49: What does a feature store provide in MLOps?
- A centralized repository for storing and sharing ML features
- A CI/CD orchestrator
- A monitoring dashboard
- A code versioning platform

26 / 49: How does AIOps reduce 'alert fatigue'?
- By disabling monitoring tools
- By correlating events and suppressing noise
- By automating deployments only
- By generating more alerts

27 / 49: What is the purpose of a model registry in MLOps?
- To store, version, and manage trained ML models
- To track CI/CD pipeline executions
- To manage Kubernetes clusters
- To store cloud infrastructure templates

28 / 49: Which of the following is NOT a stage in the MLOps lifecycle?
- Model training
- Model deployment
- Model monitoring
- Model destruction

29 / 49: Which of the following is an example of CI/CD for ML models?
- Automating retraining, testing, and deployment of models
- Running experiments locally only
- Skipping version control
- Manual model validation

30 / 49: In a CI/CD pipeline, unit tests for ML models typically validate:
- Network bandwidth
- Data preprocessing and feature transformations
- User interface design
- Operating system drivers

31 / 49: Why is explainability important in production ML models?
- To understand model decisions and build trust with stakeholders
- To increase data size
- To reduce deployment frequency
- To reduce CI/CD runtime

32 / 49: What is the purpose of data drift detection?
- To version-control datasets
- To identify changes in input data distribution affecting model performance
- To optimize CI/CD runtime
- To detect server failures

33 / 49: Which is a key output of anomaly detection in AIOps?
- CI/CD deployment reports
- Optimized hyperparameters
- Application code coverage
- Identified unusual events that may indicate system issues

34 / 49: In MLOps, what is 'model drift'?
- When model performance degrades due to changes in data patterns
- When hyperparameters remain constant
- When the model is moved between servers
- When models crash during deployment

35 / 49: Which of the following best describes model governance?
- Processes ensuring compliance, auditability, and security in ML models
- Hyperparameter optimization
- Visualization dashboards
- Anomaly detection only

36 / 49: Which monitoring metric is MOST relevant in MLOps?
- CPU utilization only
- Number of Git commits
- Model accuracy and drift detection
- Website traffic

37 / 49: Which of the following best describes the goal of AIOps?
- Automating CI/CD pipelines without monitoring
- Replacing DevOps entirely
- Applying AI/ML techniques to IT operations for proactive issue detection
- Automating infrastructure scaling only

38 / 49: Which CI/CD tool is widely integrated with MLOps pipelines?
- Jenkins
- MS Word
- Photoshop
- Final Cut Pro

39 / 49: What is the role of continuous validation in MLOps?
- Reduces network traffic
- Tracks Git commits
- Ensures deployed models remain accurate and reliable with new data
- Improves GPU performance

40 / 49: What is online learning in ML deployment?
- Offline retraining every month
- Updating the model incrementally with streaming data
- Deploying only during office hours
- Batch scoring only

41 / 49: In MLOps, what is 'model lineage'?
- Measuring network latency
- Tracking datasets, code, and parameters that produced a model
- Versioning HTML files
- Monitoring server uptime

42 / 49: Which metric is best for evaluating classification models on imbalanced datasets?
- Mean Squared Error
- Precision-Recall AUC
- CPU usage
- Accuracy only

43 / 49: Which AI technique is commonly used in AIOps for anomaly detection?
- Linear regression only
- Manual log parsing
- Rule-based filtering
- Clustering algorithms

44 / 49: What is 'model rollback' in CI/CD pipelines?
- Reverting to a previous stable model when the new one fails
- Restarting the server
- Re-training from scratch
- Resetting hyperparameters

45 / 49: What is shadow deployment in MLOps?
- Deploying on shadow servers only
- Deploying without monitoring
- Deploying only half the model
- Running a new model in parallel with the current one without serving predictions to users

46 / 49: Which tool is widely used for managing ML pipelines?
- Nagios
- Terraform
- Kubeflow
- Jenkins

47 / 49: Why is monitoring critical after model deployment?
- To speed up CI builds only
- To reduce hardware costs
- To detect performance degradation and drift
- To reduce developer workload

48 / 49: What role does Natural Language Processing (NLP) play in AIOps?
- Parsing log files and correlating incidents
- Creating CI/CD pipelines
- Training computer vision models
- Provisioning infrastructure

49 / 49: What is a common challenge in automating ML pipelines?
- Cloud billing alerts
- Automating UI testing
- Writing HTML code
- Data versioning and reproducibility
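To make the canary-deployment concept from question 1 / 49 concrete, here is a minimal Python sketch. It assumes a hash-based bucketing scheme so that each user is deterministically assigned to either the stable or the canary model; the names `in_canary` and `predict` are illustrative, not taken from any particular serving framework.

```python
import hashlib

def in_canary(user_id: str, fraction: float = 0.10) -> bool:
    """Deterministically bucket a user: the same user always gets the
    same model, and roughly `fraction` of users get the canary."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # pseudo-uniform value in [0, 1]
    return bucket < fraction

def predict(user_id: str, request, stable_model, canary_model):
    """Serve the canary model to a small subset of users; everyone else
    keeps getting the current stable model."""
    model = canary_model if in_canary(user_id) else stable_model
    return model(request)
```

Deterministic bucketing (rather than random sampling per request) keeps each user's experience consistent and makes it easy to widen the rollout by raising `fraction`, or to roll back by setting it to zero.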
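Shadow deployment (question 45 / 49) can also be sketched in a few lines: the candidate model sees the same live traffic as the stable one and its outputs are logged for offline comparison, but only the stable model's prediction reaches the user. The two placeholder models and the `serve` function below are illustrative.

```python
shadow_log = []  # in practice this would go to a metrics/logging system

def stable_model(x):
    """Current production model (placeholder)."""
    return x * 2

def shadow_model(x):
    """Candidate model under evaluation (placeholder)."""
    return x * 2 + 1

def serve(x):
    """Run both models on the request, record the shadow prediction,
    and return only the stable model's answer to the caller."""
    shadow_log.append({"input": x, "shadow_prediction": shadow_model(x)})
    return stable_model(x)
```

Because users never see the shadow model's output, this pattern lets a team compare the candidate against production on real traffic with no user-facing risk, unlike a canary rollout.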
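Finally, the data-drift detection asked about in question 32 / 49 is often implemented by comparing the live input distribution against a reference (training-time) sample. The sketch below uses the Population Stability Index over equal-width bins; the PSI > 0.2 drift threshold is a common rule of thumb, not a universal standard, and the function is a simplified illustration.

```python
import math

def psi(reference, live, bins=10):
    """Population Stability Index between a reference sample and live data.
    Larger values mean the live distribution has moved further from the
    reference; PSI > 0.2 is a widely used (but informal) drift signal."""
    lo, hi = min(reference), max(reference)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]  # inner bin edges

    def proportions(sample):
        counts = [0] * bins
        for v in sample:
            counts[sum(v > e for e in edges)] += 1  # bin index for value v
        # tiny smoothing term avoids log(0) on empty bins
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    p, q = proportions(reference), proportions(live)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

In a pipeline, a check like this would run on each batch of incoming features and trigger an alert or a retraining job (the Continuous Training of question 15 / 49) when the index crosses the chosen threshold.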