AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1. What is a key advantage of using AIOps in incident management?
   - Proactive anomaly detection and root cause analysis
   - Increased number of false alerts
   - Replacing monitoring tools entirely
   - Manual intervention for faster resolutions

2. Which of the following tools is commonly associated with AIOps?
   - Kubernetes
   - Apache Spark
   - Moogsoft
   - Terraform

3. Which is a key output of anomaly detection in AIOps?
   - Identified unusual events that may indicate system issues
   - Optimized hyperparameters
   - CI/CD deployment reports
   - Application code coverage

4. What is the role of Kubernetes in MLOps pipelines?
   - Data preprocessing
   - Model evaluation
   - Scaling and orchestrating ML workloads in production
   - Hyperparameter tuning only

5. Which tool is commonly used for workflow orchestration in ML pipelines?
   - Jenkins only
   - Apache Airflow
   - Nagios
   - Excel

6. Which of the following is NOT a stage in the MLOps lifecycle?
   - Model destruction
   - Model training
   - Model monitoring
   - Model deployment

7. Which monitoring metric is MOST relevant in MLOps?
   - CPU utilization only
   - Number of Git commits
   - Model accuracy and drift detection
   - Website traffic

8. What is 'model rollback' in CI/CD pipelines?
   - Re-training from scratch
   - Resetting hyperparameters
   - Restarting the server
   - Reverting to a previous stable model when the new one fails

9. What is a common challenge in automating ML pipelines?
   - Cloud billing alerts
   - Data versioning and reproducibility
   - Automating UI testing
   - Writing HTML code

10. What role does Natural Language Processing (NLP) play in AIOps?
   - Training computer vision models
   - Creating CI/CD pipelines
   - Parsing log files and correlating incidents
   - Provisioning infrastructure

11. Which CI/CD tool is widely integrated with MLOps pipelines?
   - Photoshop
   - MS Word
   - Final Cut Pro
   - Jenkins

12. Which of the following best describes model governance?
   - Hyperparameter optimization
   - Visualization dashboards
   - Anomaly detection only
   - Processes ensuring compliance, auditability, and security in ML models

13. How does AIOps reduce 'alert fatigue'?
   - By generating more alerts
   - By disabling monitoring tools
   - By automating deployments only
   - By correlating events and suppressing noise

14. What is Canary Deployment in MLOps?
   - Gradually rolling out a model to a subset of users before full release
   - Deploying models without validation
   - Deploying models only in staging
   - Deploying multiple models in parallel permanently

15. Which of the following is an example of predictive analytics in AIOps?
   - Real-time log streaming
   - Forecasting disk failures before they occur
   - Manual root cause analysis
   - Static capacity planning

16. Why is monitoring critical after model deployment?
   - To speed up CI builds only
   - To reduce hardware costs
   - To reduce developer workload
   - To detect performance degradation and drift

17. What is the role of GitOps in MLOps?
   - Managing ML infrastructure and deployments declaratively through Git
   - Visualizing anomalies
   - Running hyperparameter optimization
   - Training ML models

18. Which of the following is a common model deployment pattern?
   - Blue-Green Deployment
   - Git Rebase Deployment
   - Round-Robin Compilation
   - Static Scaling

19. What is the purpose of MLflow in MLOps?
   - Experiment tracking, model registry, and deployment
   - Container orchestration
   - Database sharding
   - Log analysis

20. What does CI/CD integration with a model registry achieve?
   - Tracks GitHub issues only
   - Automates promotion of validated models to production
   - Improves IDE performance
   - Simplifies HTML rendering

21. What is the main purpose of MLOps?
   - To integrate ML models into production through CI/CD pipelines
   - To automate cloud billing processes
   - To replace software engineering practices
   - To build web applications

22. In MLOps, what is 'model drift'?
   - When model performance degrades due to changes in data patterns
   - When models crash during deployment
   - When hyperparameters remain constant
   - When the model is moved between servers

23. Which of the following ensures reproducibility in ML experiments?
   - Versioning code, data, and models
   - Avoiding CI/CD
   - Skipping documentation
   - Manual hyperparameter tuning only

24. Which of the following best describes the goal of AIOps?
   - Applying AI/ML techniques to IT operations for proactive issue detection
   - Automating CI/CD pipelines without monitoring
   - Automating infrastructure scaling only
   - Replacing DevOps entirely

25. Which challenge does AIOps primarily address?
   - Lack of cloud cost optimization
   - Limited access to GitHub repositories
   - Inability to run unit tests
   - Manual analysis of large-scale operational data

26. Which tool is widely used for managing ML pipelines?
   - Kubeflow
   - Nagios
   - Jenkins
   - Terraform

27. What is the purpose of data drift detection?
   - To version-control datasets
   - To detect server failures
   - To optimize CI/CD runtime
   - To identify changes in input data distribution affecting model performance

28. Which of the following is an example of CI/CD for ML models?
   - Automating retraining, testing, and deployment of models
   - Running experiments locally only
   - Manual model validation
   - Skipping version control

29. Which of the following tools integrates monitoring into MLOps pipelines?
   - Tableau only
   - Prometheus & Grafana
   - PowerPoint
   - Slack

30. What is the role of continuous validation in MLOps?
   - Reduces network traffic
   - Tracks Git commits
   - Improves GPU performance
   - Ensures deployed models remain accurate and reliable with new data

31. Why is explainability important in production ML models?
   - To increase data size
   - To reduce deployment frequency
   - To understand model decisions and build trust with stakeholders
   - To reduce CI/CD runtime

32. Which stage in MLOps involves hyperparameter tuning?
   - Incident management
   - Deployment
   - Model training & optimization
   - Monitoring

33. Which algorithm is often used in AIOps for log anomaly detection?
   - LSTM (Long Short-Term Memory) networks
   - Static Regex Matching
   - Decision Trees for UI
   - Naive Bayes only

34. Which cloud service provides a fully managed ML pipeline solution?
   - Kubernetes without ML
   - Photoshop Cloud
   - VMware vSphere
   - AWS SageMaker Pipelines

35. What is online learning in ML deployment?
   - Offline retraining every month
   - Updating the model incrementally with streaming data
   - Deploying only during office hours
   - Batch scoring only

36. What does a feature store provide in MLOps?
   - A centralized repository for storing and sharing ML features
   - A CI/CD orchestrator
   - A monitoring dashboard
   - A code versioning platform

37. What is blue-green deployment in ML pipelines?
   - Splitting training datasets randomly
   - Running models in GPUs only
   - Using two ML algorithms simultaneously
   - Maintaining two identical environments (blue and green) to switch traffic safely during updates

38. Which AI technique is commonly used in AIOps for anomaly detection?
   - Rule-based filtering
   - Clustering algorithms
   - Manual log parsing
   - Linear regression only

39. In MLOps, what is 'model lineage'?
   - Measuring network latency
   - Monitoring server uptime
   - Versioning HTML files
   - Tracking datasets, code, and parameters that produced a model

40. What is the main role of Docker in MLOps pipelines?
   - To act as a monitoring dashboard
   - To analyze log anomalies
   - To perform hyperparameter tuning
   - To containerize ML models for consistent deployment

41. Which type of data is MOST commonly analyzed by AIOps platforms?
   - Structured business data
   - Video and image datasets
   - Unstructured IT operations data like logs, metrics, and traces
   - Customer satisfaction surveys

42. Which orchestrator is commonly used for ML pipelines in Kubernetes?
   - Airflow only
   - Nagios
   - Kubeflow Pipelines
   - Splunk

43. What is shadow deployment in MLOps?
   - Deploying on shadow servers only
   - Deploying without monitoring
   - Deploying only half the model
   - Running a new model in parallel with the current one without serving predictions to users

44. Which metric is best for evaluating classification models on imbalanced datasets?
   - Mean Squared Error
   - Accuracy only
   - Precision-Recall AUC
   - CPU usage

45. Which of the following describes Continuous Training (CT) in MLOps?
   - Running unit tests for ML code
   - Scaling infrastructure on demand
   - Deploying models continuously without validation
   - Re-training models regularly with new data

46. Which of the following ensures fairness and bias detection in ML models?
   - Using random data
   - Skipping validation
   - Responsible AI practices and monitoring
   - Relying on accuracy only

47. What is the difference between DevOps and MLOps?
   - MLOps is only about data collection
   - DevOps is only for cloud computing
   - MLOps replaces DevOps entirely
   - DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring

48. What is the purpose of a model registry in MLOps?
   - To store, version, and manage trained ML models
   - To manage Kubernetes clusters
   - To track CI/CD pipeline executions
   - To store cloud infrastructure templates

49. In a CI/CD pipeline, unit tests for ML models typically validate:
   - Data preprocessing and feature transformations
   - Network bandwidth
   - User interface design
   - Operating system drivers
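Questions 3 and 15 hinge on anomaly detection over operational metrics. As a minimal illustration of the idea (not any AIOps vendor's actual algorithm), a rolling z-score detector flags values that deviate sharply from recent history; the window size and threshold below are assumed values chosen for the sketch:

```python
from collections import deque
from statistics import mean, stdev

def zscore_anomalies(values, window=10, threshold=3.0):
    """Return indices whose value lies more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    history = deque(maxlen=window)
    anomalies = []
    for i, v in enumerate(values):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(v - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(v)  # the anomaly itself also enters the window
    return anomalies
```

For example, a metric oscillating between 10 and 11 that suddenly jumps to 100 is flagged at the index of the spike, while the ordinary oscillation is not.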
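The canary deployment in question 14 can be sketched as a deterministic traffic split: hash each user ID into a bucket and send a small, stable slice of users to the new model. The 10% share and the SHA-256 bucketing here are illustrative assumptions, not a prescribed implementation:

```python
import hashlib

def canary_route(user_id: str, canary_percent: int = 10) -> str:
    """Route a fixed fraction of users to the canary model; hashing the
    user ID means the same user always lands on the same model."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"
```

Because the split is keyed on the user ID rather than randomized per request, each user's experience stays consistent, and the canary share can be raised gradually before the full rollout.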
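Question 27 asks about data drift detection. One simple approach, sketched here in pure Python with an arbitrary 0.2 alert threshold, is to compare the empirical distribution of a reference window against the current window using the two-sample Kolmogorov-Smirnov statistic:

```python
def ks_statistic(reference, current):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDFs of the two samples."""
    ref, cur = sorted(reference), sorted(current)
    points = sorted(set(ref) | set(cur))

    def ecdf(sample, x):
        # Fraction of the sample at or below x.
        return sum(1 for s in sample if s <= x) / len(sample)

    return max(abs(ecdf(ref, x) - ecdf(cur, x)) for x in points)

def drifted(reference, current, threshold=0.2):
    """Flag drift when the distribution gap exceeds the threshold."""
    return ks_statistic(reference, current) > threshold
```

In practice, production systems typically run such a check per feature on a schedule and feed alerts into retraining; the quadratic ECDF scan above is fine for a sketch but would be replaced by a sorted-merge implementation for large windows.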
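Question 35 contrasts online learning with periodic batch retraining. The sketch below, using an illustrative learning rate, updates a one-feature linear model one streamed example at a time: each event adjusts the weights once and is then discarded, rather than being stored for a later full refit:

```python
def online_sgd(stream, lr=0.05):
    """Incrementally fit y ≈ w*x + b with one SGD step per streamed
    (x, y) pair; the learning rate is an illustrative choice."""
    w, b = 0.0, 0.0
    for x, y in stream:
        err = (w * x + b) - y   # prediction error on this event
        w -= lr * err * x       # single gradient step, then discard event
        b -= lr * err
    return w, b
```

Fed a stream generated from y = 2x + 1, the model converges toward w ≈ 2, b ≈ 1 without ever holding the data in memory.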
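Question 44 prefers precision-recall over raw accuracy on imbalanced data. The toy example below (a hypothetical dataset with 5% positives and an all-negative classifier) shows why: accuracy looks excellent while recall is zero:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for the positive class; unlike accuracy,
    these stay informative when positives are rare."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

On labels `[0]*95 + [1]*5`, predicting all zeros scores 95% accuracy yet both precision and recall are 0.0, which is exactly the failure mode PR-based metrics expose; PR AUC extends this by sweeping the decision threshold.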