AI/ML Screening Test
By Vikas Sharma, 19/08/2025

1. What is the role of Kubernetes in MLOps pipelines?
   a) Model evaluation
   b) Hyperparameter tuning only
   c) Scaling and orchestrating ML workloads in production
   d) Data preprocessing

2. Which of the following ensures reproducibility in ML experiments?
   a) Manual hyperparameter tuning only
   b) Versioning code, data, and models
   c) Avoiding CI/CD
   d) Skipping documentation

3. Which algorithm is often used in AIOps for log anomaly detection?
   a) Decision Trees for UI
   b) Naive Bayes only
   c) Static regex matching
   d) LSTM (Long Short-Term Memory) networks

4. What is blue-green deployment in ML pipelines?
   a) Maintaining two identical environments (blue and green) to switch traffic safely during updates
   b) Running models on GPUs only
   c) Splitting training datasets randomly
   d) Using two ML algorithms simultaneously

5. What is the purpose of data drift detection?
   a) To identify changes in input data distribution affecting model performance
   b) To optimize CI/CD runtime
   c) To detect server failures
   d) To version-control datasets

6. What is the main purpose of MLOps?
   a) To integrate ML models into production through CI/CD pipelines
   b) To build web applications
   c) To replace software engineering practices
   d) To automate cloud billing processes

7. Which is a key output of anomaly detection in AIOps?
   a) Identified unusual events that may indicate system issues
   b) Application code coverage
   c) CI/CD deployment reports
   d) Optimized hyperparameters

8. Which of the following is a common model deployment pattern?
   a) Static Scaling
   b) Round-Robin Compilation
   c) Blue-Green Deployment
   d) Git Rebase Deployment

9. Which of the following ensures fairness and bias detection in ML models?
   a) Relying on accuracy only
   b) Using random data
   c) Skipping validation
   d) Responsible AI practices and monitoring

10. Which monitoring metric is MOST relevant in MLOps?
   a) CPU utilization only
   b) Website traffic
   c) Model accuracy and drift detection
   d) Number of Git commits

11. What is 'model rollback' in CI/CD pipelines?
   a) Re-training from scratch
   b) Resetting hyperparameters
   c) Restarting the server
   d) Reverting to a previous stable model when the new one fails

12. What role does Natural Language Processing (NLP) play in AIOps?
   a) Parsing log files and correlating incidents
   b) Creating CI/CD pipelines
   c) Training computer vision models
   d) Provisioning infrastructure

13. Which of the following is an example of predictive analytics in AIOps?
   a) Manual root cause analysis
   b) Real-time log streaming
   c) Forecasting disk failures before they occur
   d) Static capacity planning

14. Which cloud service provides a fully managed ML pipeline solution?
   a) Kubernetes without ML
   b) AWS SageMaker Pipelines
   c) VMware vSphere
   d) Photoshop Cloud

15. Which of the following is NOT a stage in the MLOps lifecycle?
   a) Model monitoring
   b) Model destruction
   c) Model deployment
   d) Model training

16. What is the purpose of MLflow in MLOps?
   a) Database sharding
   b) Log analysis
   c) Container orchestration
   d) Experiment tracking, model registry, and deployment

17. Why is explainability important in production ML models?
   a) To reduce CI/CD runtime
   b) To increase data size
   c) To reduce deployment frequency
   d) To understand model decisions and build trust with stakeholders

18. Which orchestrator is commonly used for ML pipelines in Kubernetes?
   a) Airflow only
   b) Nagios
   c) Kubeflow Pipelines
   d) Splunk

19. Which of the following tools integrates monitoring into MLOps pipelines?
   a) Prometheus & Grafana
   b) PowerPoint
   c) Tableau only
   d) Slack

20. Which challenge does AIOps primarily address?
   a) Inability to run unit tests
   b) Lack of cloud cost optimization
   c) Manual analysis of large-scale operational data
   d) Limited access to GitHub repositories

21. What is the purpose of a model registry in MLOps?
   a) To store, version, and manage trained ML models
   b) To track CI/CD pipeline executions
   c) To store cloud infrastructure templates
   d) To manage Kubernetes clusters

22. What is shadow deployment in MLOps?
   a) Deploying only half the model
   b) Running a new model in parallel with the current one without serving its predictions to users
   c) Deploying without monitoring
   d) Deploying on shadow servers only

23. Which of the following describes Continuous Training (CT) in MLOps?
   a) Running unit tests for ML code
   b) Deploying models continuously without validation
   c) Scaling infrastructure on demand
   d) Re-training models regularly with new data

24. What does a feature store provide in MLOps?
   a) A code versioning platform
   b) A monitoring dashboard
   c) A centralized repository for storing and sharing ML features
   d) A CI/CD orchestrator

25. In a CI/CD pipeline, unit tests for ML models typically validate:
   a) Data preprocessing and feature transformations
   b) User interface design
   c) Network bandwidth
   d) Operating system drivers

26. What is the main role of Docker in MLOps pipelines?
   a) To act as a monitoring dashboard
   b) To perform hyperparameter tuning
   c) To containerize ML models for consistent deployment
   d) To analyze log anomalies

27. In MLOps, what is 'model lineage'?
   a) Measuring network latency
   b) Tracking the datasets, code, and parameters that produced a model
   c) Versioning HTML files
   d) Monitoring server uptime

28. Which of the following best describes the goal of AIOps?
   a) Replacing DevOps entirely
   b) Applying AI/ML techniques to IT operations for proactive issue detection
   c) Automating CI/CD pipelines without monitoring
   d) Automating infrastructure scaling only

29. What is canary deployment in MLOps?
   a) Deploying models without validation
   b) Gradually rolling out a model to a subset of users before full release
   c) Deploying multiple models in parallel permanently
   d) Deploying models only in staging

30. What is the difference between DevOps and MLOps?
   a) DevOps is only for cloud computing
   b) DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
   c) MLOps replaces DevOps entirely
   d) MLOps is only about data collection

31. Which of the following is an example of CI/CD for ML models?
   a) Automating retraining, testing, and deployment of models
   b) Running experiments locally only
   c) Skipping version control
   d) Manual model validation

32. What is a common challenge in automating ML pipelines?
   a) Cloud billing alerts
   b) Data versioning and reproducibility
   c) Automating UI testing
   d) Writing HTML code

33. In MLOps, what is 'model drift'?
   a) When models crash during deployment
   b) When model performance degrades due to changes in data patterns
   c) When the model is moved between servers
   d) When hyperparameters remain constant

34. What is the role of continuous validation in MLOps?
   a) Ensures deployed models remain accurate and reliable with new data
   b) Tracks Git commits
   c) Reduces network traffic
   d) Improves GPU performance

35. What does CI/CD integration with a model registry achieve?
   a) Improves IDE performance
   b) Tracks GitHub issues only
   c) Simplifies HTML rendering
   d) Automates promotion of validated models to production

36. Which tool is widely used for managing ML pipelines?
   a) Kubeflow
   b) Nagios
   c) Jenkins
   d) Terraform

37. Which of the following best describes model governance?
   a) Processes ensuring compliance, auditability, and security in ML models
   b) Anomaly detection only
   c) Hyperparameter optimization
   d) Visualization dashboards

38. What is online learning in ML deployment?
   a) Batch scoring only
   b) Updating the model incrementally with streaming data
   c) Deploying only during office hours
   d) Offline retraining every month

39. Which stage in MLOps involves hyperparameter tuning?
   a) Monitoring
   b) Incident management
   c) Model training & optimization
   d) Deployment

40. How does AIOps reduce 'alert fatigue'?
   a) By automating deployments only
   b) By correlating events and suppressing noise
   c) By generating more alerts
   d) By disabling monitoring tools

41. Which metric is best for evaluating classification models on imbalanced datasets?
   a) Precision-Recall AUC
   b) Mean Squared Error
   c) CPU usage
   d) Accuracy only

42. Which of the following tools is commonly associated with AIOps?
   a) Kubernetes
   b) Apache Spark
   c) Terraform
   d) Moogsoft

43. What is a key advantage of using AIOps in incident management?
   a) Increased number of false alerts
   b) Proactive anomaly detection and root cause analysis
   c) Manual intervention for faster resolutions
   d) Replacing monitoring tools entirely

44. Which AI technique is commonly used in AIOps for anomaly detection?
   a) Clustering algorithms
   b) Linear regression only
   c) Rule-based filtering
   d) Manual log parsing

45. Which CI/CD tool is widely integrated with MLOps pipelines?
   a) Final Cut Pro
   b) Photoshop
   c) MS Word
   d) Jenkins

46. Which type of data is MOST commonly analyzed by AIOps platforms?
   a) Unstructured IT operations data like logs, metrics, and traces
   b) Structured business data
   c) Customer satisfaction surveys
   d) Video and image datasets

47. What is the role of GitOps in MLOps?
   a) Visualizing anomalies
   b) Training ML models
   c) Managing ML infrastructure and deployments declaratively through Git
   d) Running hyperparameter optimization

48. Why is monitoring critical after model deployment?
   a) To speed up CI builds only
   b) To detect performance degradation and drift
   c) To reduce hardware costs
   d) To reduce developer workload

49. Which tool is commonly used for workflow orchestration in ML pipelines?
   a) Excel
   b) Nagios
   c) Jenkins only
   d) Apache Airflow
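Questions 5 and 33 ask about data and model drift: a shift in the input distribution that degrades a deployed model. A minimal sketch of how such a check could work, comparing live data against a training-time baseline with a two-sample Kolmogorov-Smirnov statistic (the `ks_statistic` helper and the Gaussian toy data are illustrative assumptions, not part of the test; production systems typically use a library such as `scipy.stats.ks_2samp` plus an alerting threshold):

```python
import bisect
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest
    vertical gap between the two empirical CDFs (0 = identical,
    1 = fully separated)."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_xs, x):
        # Fraction of values in sorted_xs that are <= x.
        return bisect.bisect_right(sorted_xs, x) / len(sorted_xs)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

random.seed(42)
baseline = [random.gauss(0.0, 1.0) for _ in range(500)]  # training-time feature values
live_ok  = [random.gauss(0.0, 1.0) for _ in range(500)]  # live data, same distribution
drifted  = [random.gauss(1.5, 1.0) for _ in range(500)]  # live data with a shifted mean

print(f"no drift: {ks_statistic(baseline, live_ok):.3f}")   # small gap
print(f"drift:    {ks_statistic(baseline, drifted):.3f}")   # large gap
```

When the statistic crosses a chosen threshold, a pipeline would typically raise an alert or trigger the Continuous Training loop from question 23.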
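Question 29 defines canary deployment as gradually routing a subset of users to a new model before full release. One common way to implement the traffic split is deterministic hash-based bucketing, sketched below (the `route_request` helper and the 10% split are illustrative assumptions, not from the test):

```python
import hashlib

def route_request(user_id: str, canary_fraction: float = 0.1) -> str:
    """Deterministically assign a user to the canary or stable model.

    Hashing the user id (instead of picking randomly per request)
    keeps each user sticky to one model version, so their experience
    stays consistent while the canary is being evaluated.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # uniform bucket in [0, 100)
    return "canary" if bucket < canary_fraction * 100 else "stable"

routes = [route_request(f"user-{i}") for i in range(10_000)]
print("canary share:", routes.count("canary") / len(routes))  # roughly 0.1
```

If the canary's accuracy or error-rate metrics hold up, the fraction is raised toward 1.0; if not, the rollback pattern from question 11 reverts all traffic to the stable model.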