AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1 / 49. What is shadow deployment in MLOps?
- Deploying without monitoring
- Running a new model in parallel with the current one without serving predictions to users
- Deploying on shadow servers only
- Deploying only half the model

2 / 49. What is online learning in ML deployment?
- Batch scoring only
- Deploying only during office hours
- Updating the model incrementally with streaming data
- Offline retraining every month

3 / 49. Which of the following best describes the goal of AIOps?
- Automating infrastructure scaling only
- Automating CI/CD pipelines without monitoring
- Replacing DevOps entirely
- Applying AI/ML techniques to IT operations for proactive issue detection

4 / 49. Which challenge does AIOps primarily address?
- Manual analysis of large-scale operational data
- Inability to run unit tests
- Limited access to GitHub repositories
- Lack of cloud cost optimization

5 / 49. Which metric is best for evaluating classification models on imbalanced datasets?
- Mean Squared Error
- Accuracy only
- CPU usage
- Precision-Recall AUC

6 / 49. Which is a key output of anomaly detection in AIOps?
- CI/CD deployment reports
- Application code coverage
- Identified unusual events that may indicate system issues
- Optimized hyperparameters

7 / 49. What is the role of GitOps in MLOps?
- Managing ML infrastructure and deployments declaratively through Git
- Training ML models
- Visualizing anomalies
- Running hyperparameter optimization

8 / 49. What is the purpose of data drift detection?
- To optimize CI/CD runtime
- To version-control datasets
- To detect server failures
- To identify changes in input data distribution affecting model performance

9 / 49. What role does Natural Language Processing (NLP) play in AIOps?
- Parsing log files and correlating incidents
- Training computer vision models
- Provisioning infrastructure
- Creating CI/CD pipelines

10 / 49. Which stage in MLOps involves hyperparameter tuning?
- Monitoring
- Model training & optimization
- Incident management
- Deployment

11 / 49. Which cloud service provides a fully managed ML pipeline solution?
- VMware vSphere
- Photoshop Cloud
- AWS SageMaker Pipelines
- Kubernetes without ML

12 / 49. In MLOps, what is 'model lineage'?
- Tracking datasets, code, and parameters that produced a model
- Versioning HTML files
- Measuring network latency
- Monitoring server uptime

13 / 49. Which of the following describes Continuous Training (CT) in MLOps?
- Re-training models regularly with new data
- Scaling infrastructure on demand
- Running unit tests for ML code
- Deploying models continuously without validation

14 / 49. Which of the following is NOT a stage in the MLOps lifecycle?
- Model training
- Model monitoring
- Model deployment
- Model destruction

15 / 49. What is the purpose of a model registry in MLOps?
- To store, version, and manage trained ML models
- To track CI/CD pipeline executions
- To manage Kubernetes clusters
- To store cloud infrastructure templates

16 / 49. Which type of data is MOST commonly analyzed by AIOps platforms?
- Unstructured IT operations data like logs, metrics, and traces
- Customer satisfaction surveys
- Video and image datasets
- Structured business data

17 / 49. In MLOps, what is 'model drift'?
- When hyperparameters remain constant
- When models crash during deployment
- When the model is moved between servers
- When model performance degrades due to changes in data patterns

18 / 49. Which of the following ensures fairness and bias detection in ML models?
- Using random data
- Relying on accuracy only
- Responsible AI practices and monitoring
- Skipping validation

19 / 49. What is the main purpose of MLOps?
- To build web applications
- To replace software engineering practices
- To integrate ML models into production through CI/CD pipelines
- To automate cloud billing processes

20 / 49. Which orchestrator is commonly used for ML pipelines in Kubernetes?
- Airflow only
- Kubeflow Pipelines
- Nagios
- Splunk

21 / 49. In a CI/CD pipeline, unit tests for ML models typically validate:
- Network bandwidth
- Data preprocessing and feature transformations
- User interface design
- Operating system drivers

22 / 49. What is a key advantage of using AIOps in incident management?
- Manual intervention for faster resolutions
- Proactive anomaly detection and root cause analysis
- Increased number of false alerts
- Replacing monitoring tools entirely

23 / 49. Which of the following is an example of CI/CD for ML models?
- Running experiments locally only
- Skipping version control
- Automating retraining, testing, and deployment of models
- Manual model validation

24 / 49. Which of the following is an example of predictive analytics in AIOps?
- Real-time log streaming
- Manual root cause analysis
- Forecasting disk failures before they occur
- Static capacity planning

25 / 49. What does a feature store provide in MLOps?
- A centralized repository for storing and sharing ML features
- A monitoring dashboard
- A code versioning platform
- A CI/CD orchestrator

26 / 49. What is the role of Kubernetes in MLOps pipelines?
- Data preprocessing
- Hyperparameter tuning only
- Scaling and orchestrating ML workloads in production
- Model evaluation

27 / 49. What is the role of continuous validation in MLOps?
- Tracks Git commits
- Improves GPU performance
- Reduces network traffic
- Ensures deployed models remain accurate and reliable with new data

28 / 49. What does CI/CD integration with a model registry achieve?
- Simplifies HTML rendering
- Automates promotion of validated models to production
- Improves IDE performance
- Tracks GitHub issues only

29 / 49. What is the main role of Docker in MLOps pipelines?
- To act as a monitoring dashboard
- To containerize ML models for consistent deployment
- To perform hyperparameter tuning
- To analyze log anomalies

30 / 49. Which monitoring metric is MOST relevant in MLOps?
- CPU utilization only
- Number of Git commits
- Website traffic
- Model accuracy and drift detection

31 / 49. Which CI/CD tool is widely integrated with MLOps pipelines?
- Photoshop
- Final Cut Pro
- MS Word
- Jenkins

32 / 49. How does AIOps reduce 'alert fatigue'?
- By disabling monitoring tools
- By correlating events and suppressing noise
- By automating deployments only
- By generating more alerts

33 / 49. Which of the following is a common model deployment pattern?
- Git Rebase Deployment
- Static Scaling
- Round-Robin Compilation
- Blue-Green Deployment

34 / 49. Which of the following best describes model governance?
- Hyperparameter optimization
- Visualization dashboards
- Anomaly detection only
- Processes ensuring compliance, auditability, and security in ML models

35 / 49. What is 'model rollback' in CI/CD pipelines?
- Reverting to a previous stable model when the new one fails
- Resetting hyperparameters
- Re-training from scratch
- Restarting the server

36 / 49. What is blue-green deployment in ML pipelines?
- Running models on GPUs only
- Maintaining two identical environments (blue and green) to switch traffic safely during updates
- Splitting training datasets randomly
- Using two ML algorithms simultaneously

37 / 49. Which of the following tools integrates monitoring into MLOps pipelines?
- PowerPoint
- Slack
- Prometheus & Grafana
- Tableau only

38 / 49. Which of the following ensures reproducibility in ML experiments?
- Manual hyperparameter tuning only
- Skipping documentation
- Avoiding CI/CD
- Versioning code, data, and models

39 / 49. Which of the following tools is commonly associated with AIOps?
- Moogsoft
- Terraform
- Kubernetes
- Apache Spark

40 / 49. What is the difference between DevOps and MLOps?
- MLOps replaces DevOps entirely
- MLOps is only about data collection
- DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
- DevOps is only for cloud computing

41 / 49. Which AI technique is commonly used in AIOps for anomaly detection?
- Linear regression only
- Clustering algorithms
- Rule-based filtering
- Manual log parsing

42 / 49. Why is explainability important in production ML models?
- To reduce CI/CD runtime
- To understand model decisions and build trust with stakeholders
- To increase data size
- To reduce deployment frequency

43 / 49. Which tool is commonly used for workflow orchestration in ML pipelines?
- Jenkins only
- Apache Airflow
- Nagios
- Excel

44 / 49. Which tool is widely used for managing ML pipelines?
- Terraform
- Nagios
- Kubeflow
- Jenkins

45 / 49. What is the purpose of MLflow in MLOps?
- Database sharding
- Experiment tracking, model registry, and deployment
- Container orchestration
- Log analysis

46 / 49. Which algorithm is often used in AIOps for log anomaly detection?
- Decision Trees for UI
- Naive Bayes only
- LSTM (Long Short-Term Memory) networks
- Static Regex Matching

47 / 49. What is Canary Deployment in MLOps?
- Gradually rolling out a model to a subset of users before full release
- Deploying multiple models in parallel permanently
- Deploying models without validation
- Deploying models only in staging

48 / 49. Why is monitoring critical after model deployment?
- To speed up CI builds only
- To detect performance degradation and drift
- To reduce hardware costs
- To reduce developer workload

49 / 49. What is a common challenge in automating ML pipelines?
- Data versioning and reproducibility
- Cloud billing alerts
- Automating UI testing
- Writing HTML code
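Several questions in this test (data drift detection, model drift, post-deployment monitoring) hinge on the same idea: comparing the distribution of live inputs against the distribution the model was trained on. As a minimal pure-Python sketch of one common technique, the snippet below computes the Population Stability Index (PSI) over histogram bins; the function, the sample data, and the 0.1/0.2 thresholds are illustrative conventions, not taken from any particular library.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    A common rule of thumb: PSI < 0.1 is stable, PSI > 0.2 suggests drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a constant sample

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # add-one smoothing so empty bins don't blow up the log term
        return [(c + 1) / (len(xs) + bins) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Synthetic example: same distribution vs. a shifted live distribution
train = [float(i % 50) for i in range(500)]                 # training inputs
live_ok = [float(i % 50) for i in range(500)]               # same shape
live_shifted = [float(i % 50) + 30.0 for i in range(500)]   # inputs drifted up

print(psi(train, live_ok) < 0.1)       # True: no drift flagged
print(psi(train, live_shifted) > 0.2)  # True: drift flagged
```

In a production pipeline this check would typically run on a schedule against a feature store or inference logs, with a flagged PSI triggering an alert or a retraining job rather than a print statement.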