AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1 / 49: What is the difference between DevOps and MLOps?
- DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
- MLOps replaces DevOps entirely
- MLOps is only about data collection
- DevOps is only for cloud computing

2 / 49: What is the role of Kubernetes in MLOps pipelines?
- Hyperparameter tuning only
- Data preprocessing
- Scaling and orchestrating ML workloads in production
- Model evaluation

3 / 49: Which algorithm is often used in AIOps for log anomaly detection?
- Naive Bayes only
- Static Regex Matching
- LSTM (Long Short-Term Memory) networks
- Decision Trees for UI

4 / 49: What is the role of GitOps in MLOps?
- Training ML models
- Visualizing anomalies
- Managing ML infrastructure and deployments declaratively through Git
- Running hyperparameter optimization

5 / 49: Which monitoring metric is MOST relevant in MLOps?
- Number of Git commits
- Website traffic
- CPU utilization only
- Model accuracy and drift detection

6 / 49: Which of the following is NOT a stage in the MLOps lifecycle?
- Model training
- Model monitoring
- Model deployment
- Model destruction

7 / 49: In a CI/CD pipeline, unit tests for ML models typically validate:
- Network bandwidth
- Data preprocessing and feature transformations
- User interface design
- Operating system drivers

8 / 49: Which metric is best for evaluating classification models on imbalanced datasets?
- Mean Squared Error
- CPU usage
- Accuracy only
- Precision-Recall AUC

9 / 49: Why is explainability important in production ML models?
- To reduce deployment frequency
- To increase data size
- To understand model decisions and build trust with stakeholders
- To reduce CI/CD runtime

10 / 49: What is a common challenge in automating ML pipelines?
- Data versioning and reproducibility
- Automating UI testing
- Cloud billing alerts
- Writing HTML code

11 / 49: What is the purpose of a model registry in MLOps?
- To track CI/CD pipeline executions
- To store cloud infrastructure templates
- To manage Kubernetes clusters
- To store, version, and manage trained ML models

12 / 49: What is a key advantage of using AIOps in incident management?
- Proactive anomaly detection and root cause analysis
- Replacing monitoring tools entirely
- Manual intervention for faster resolutions
- Increased number of false alerts

13 / 49: What does a feature store provide in MLOps?
- A code versioning platform
- A monitoring dashboard
- A centralized repository for storing and sharing ML features
- A CI/CD orchestrator

14 / 49: Which of the following describes Continuous Training (CT) in MLOps?
- Re-training models regularly with new data
- Deploying models continuously without validation
- Scaling infrastructure on demand
- Running unit tests for ML code

15 / 49: Which of the following ensures reproducibility in ML experiments?
- Avoiding CI/CD
- Versioning code, data, and models
- Skipping documentation
- Manual hyperparameter tuning only

16 / 49: Which AI technique is commonly used in AIOps for anomaly detection?
- Rule-based filtering
- Manual log parsing
- Clustering algorithms
- Linear regression only

17 / 49: Which of the following best describes the goal of AIOps?
- Automating infrastructure scaling only
- Automating CI/CD pipelines without monitoring
- Applying AI/ML techniques to IT operations for proactive issue detection
- Replacing DevOps entirely

18 / 49: Which challenge does AIOps primarily address?
- Inability to run unit tests
- Lack of cloud cost optimization
- Limited access to GitHub repositories
- Manual analysis of large-scale operational data

19 / 49: Which CI/CD tool is widely integrated with MLOps pipelines?
- Jenkins
- Final Cut Pro
- Photoshop
- MS Word

20 / 49: Which of the following is a common model deployment pattern?
- Blue-Green Deployment
- Git Rebase Deployment
- Static Scaling
- Round-Robin Compilation

21 / 49: Which tool is commonly used for workflow orchestration in ML pipelines?
- Nagios
- Apache Airflow
- Jenkins only
- Excel

22 / 49: How does AIOps reduce 'alert fatigue'?
- By automating deployments only
- By disabling monitoring tools
- By generating more alerts
- By correlating events and suppressing noise

23 / 49: Which tool is widely used for managing ML pipelines?
- Kubeflow
- Nagios
- Terraform
- Jenkins

24 / 49: What is the purpose of data drift detection?
- To identify changes in input data distribution affecting model performance
- To version-control datasets
- To optimize CI/CD runtime
- To detect server failures

25 / 49: Which of the following is an example of predictive analytics in AIOps?
- Static capacity planning
- Real-time log streaming
- Manual root cause analysis
- Forecasting disk failures before they occur

26 / 49: Which of the following is an example of CI/CD for ML models?
- Skipping version control
- Manual model validation
- Running experiments locally only
- Automating retraining, testing, and deployment of models

27 / 49: What role does Natural Language Processing (NLP) play in AIOps?
- Parsing log files and correlating incidents
- Creating CI/CD pipelines
- Provisioning infrastructure
- Training computer vision models

28 / 49: Why is monitoring critical after model deployment?
- To reduce hardware costs
- To detect performance degradation and drift
- To reduce developer workload
- To speed up CI builds only

29 / 49: What is blue-green deployment in ML pipelines?
- Splitting training datasets randomly
- Using two ML algorithms simultaneously
- Maintaining two identical environments (blue and green) to switch traffic safely during updates
- Running models on GPUs only

30 / 49: Which type of data is MOST commonly analyzed by AIOps platforms?
- Unstructured IT operations data like logs, metrics, and traces
- Video and image datasets
- Customer satisfaction surveys
- Structured business data

31 / 49: Which stage in MLOps involves hyperparameter tuning?
- Model training & optimization
- Monitoring
- Incident management
- Deployment

32 / 49: In MLOps, what is 'model drift'?
- When hyperparameters remain constant
- When model performance degrades due to changes in data patterns
- When models crash during deployment
- When the model is moved between servers

33 / 49: What is shadow deployment in MLOps?
- Running a new model in parallel with the current one without serving predictions to users
- Deploying only half the model
- Deploying without monitoring
- Deploying on shadow servers only

34 / 49: What is the main purpose of MLOps?
- To build web applications
- To replace software engineering practices
- To integrate ML models into production through CI/CD pipelines
- To automate cloud billing processes

35 / 49: Which of the following tools is commonly associated with AIOps?
- Moogsoft
- Terraform
- Kubernetes
- Apache Spark

36 / 49: Which orchestrator is commonly used for ML pipelines in Kubernetes?
- Nagios
- Kubeflow Pipelines
- Airflow only
- Splunk

37 / 49: What is Canary Deployment in MLOps?
- Deploying models only in staging
- Deploying models without validation
- Gradually rolling out a model to a subset of users before full release
- Deploying multiple models in parallel permanently

38 / 49: In MLOps, what is 'model lineage'?
- Monitoring server uptime
- Versioning HTML files
- Measuring network latency
- Tracking the datasets, code, and parameters that produced a model

39 / 49: What is online learning in ML deployment?
- Offline retraining every month
- Updating the model incrementally with streaming data
- Batch scoring only
- Deploying only during office hours

40 / 49: What does CI/CD integration with a model registry achieve?
- Tracks GitHub issues only
- Simplifies HTML rendering
- Automates promotion of validated models to production
- Improves IDE performance

41 / 49: Which is a key output of anomaly detection in AIOps?
- Optimized hyperparameters
- Identified unusual events that may indicate system issues
- CI/CD deployment reports
- Application code coverage

42 / 49: Which cloud service provides a fully managed ML pipeline solution?
- Kubernetes without ML
- Photoshop Cloud
- AWS SageMaker Pipelines
- VMware vSphere

43 / 49: What is the main role of Docker in MLOps pipelines?
- To perform hyperparameter tuning
- To analyze log anomalies
- To act as a monitoring dashboard
- To containerize ML models for consistent deployment

44 / 49: What is 'model rollback' in CI/CD pipelines?
- Reverting to a previous stable model when the new one fails
- Resetting hyperparameters
- Restarting the server
- Re-training from scratch

45 / 49: What is the role of continuous validation in MLOps?
- Tracks Git commits
- Ensures deployed models remain accurate and reliable with new data
- Reduces network traffic
- Improves GPU performance

46 / 49: What is the purpose of MLflow in MLOps?
- Container orchestration
- Log analysis
- Experiment tracking, model registry, and deployment
- Database sharding

47 / 49: Which of the following tools integrates monitoring into MLOps pipelines?
- Prometheus & Grafana
- Slack
- Tableau only
- PowerPoint

48 / 49: Which of the following ensures fairness and bias detection in ML models?
- Responsible AI practices and monitoring
- Relying on accuracy only
- Using random data
- Skipping validation

49 / 49: Which of the following best describes model governance?
- Processes ensuring compliance, auditability, and security in ML models
- Anomaly detection only
- Hyperparameter optimization
- Visualization dashboards
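Several questions above touch on data drift and model drift detection. A minimal, self-contained sketch of one common approach: a two-sample Kolmogorov-Smirnov check comparing live feature values against a training-time reference. The 0.1 threshold and the synthetic normal data are illustrative assumptions, not something the quiz specifies.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def detect_drift(reference, current, threshold=0.1):
    """Flag drift when the live feature distribution strays too far
    from the training-time reference (threshold is illustrative)."""
    return ks_statistic(reference, current) > threshold

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)      # feature values seen at training time
live_ok = rng.normal(0.0, 1.0, 5000)        # same distribution, no drift expected
live_shifted = rng.normal(0.8, 1.0, 5000)   # mean shift, drift expected

print(detect_drift(reference, live_ok))       # False
print(detect_drift(reference, live_shifted))  # True
```

In production this check would typically run per feature over rolling windows, with the statistic exported to a monitoring stack (e.g. Prometheus/Grafana, as in question 47) so that drift triggers an alert or a retraining job rather than a print statement.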