AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1. Which cloud service provides a fully managed ML pipeline solution?
   - VMware vSphere
   - Photoshop Cloud
   - Kubernetes without ML
   - AWS SageMaker Pipelines

2. What does CI/CD integration with a model registry achieve?
   - Tracks GitHub issues only
   - Automates promotion of validated models to production
   - Simplifies HTML rendering
   - Improves IDE performance

3. In MLOps, what is 'model drift'?
   - When hyperparameters remain constant
   - When model performance degrades due to changes in data patterns
   - When the model is moved between servers
   - When models crash during deployment

4. Which of the following is an example of CI/CD for ML models?
   - Manual model validation
   - Running experiments locally only
   - Automating retraining, testing, and deployment of models
   - Skipping version control

5. What does a feature store provide in MLOps?
   - A CI/CD orchestrator
   - A centralized repository for storing and sharing ML features
   - A code versioning platform
   - A monitoring dashboard

6. What is the role of GitOps in MLOps?
   - Running hyperparameter optimization
   - Managing ML infrastructure and deployments declaratively through Git
   - Training ML models
   - Visualizing anomalies

7. What is 'model rollback' in CI/CD pipelines?
   - Resetting hyperparameters
   - Reverting to a previous stable model when the new one fails
   - Restarting the server
   - Re-training from scratch

8. What is blue-green deployment in ML pipelines?
   - Splitting training datasets randomly
   - Running models on GPUs only
   - Using two ML algorithms simultaneously
   - Maintaining two identical environments (blue and green) to switch traffic safely during updates

9. What is the purpose of data drift detection?
   - To identify changes in input data distribution affecting model performance
   - To optimize CI/CD runtime
   - To version-control datasets
   - To detect server failures

10. What is the purpose of MLflow in MLOps?
   - Database sharding
   - Container orchestration
   - Experiment tracking, model registry, and deployment
   - Log analysis

11. How does AIOps reduce 'alert fatigue'?
   - By automating deployments only
   - By generating more alerts
   - By correlating events and suppressing noise
   - By disabling monitoring tools

12. What is the purpose of a model registry in MLOps?
   - To store cloud infrastructure templates
   - To track CI/CD pipeline executions
   - To manage Kubernetes clusters
   - To store, version, and manage trained ML models

13. Which challenge does AIOps primarily address?
   - Inability to run unit tests
   - Manual analysis of large-scale operational data
   - Limited access to GitHub repositories
   - Lack of cloud cost optimization

14. What is online learning in ML deployment?
   - Updating the model incrementally with streaming data
   - Deploying only during office hours
   - Batch scoring only
   - Offline retraining every month

15. Which CI/CD tool is widely integrated with MLOps pipelines?
   - Photoshop
   - MS Word
   - Final Cut Pro
   - Jenkins

16. Which type of data is MOST commonly analyzed by AIOps platforms?
   - Unstructured IT operations data like logs, metrics, and traces
   - Customer satisfaction surveys
   - Video and image datasets
   - Structured business data

17. Which metric is best for evaluating classification models on imbalanced datasets?
   - Precision-Recall AUC
   - Accuracy only
   - Mean Squared Error
   - CPU usage

18. What is shadow deployment in MLOps?
   - Running a new model in parallel with the current one without serving predictions to users
   - Deploying without monitoring
   - Deploying only half the model
   - Deploying on shadow servers only

19. In MLOps, what is 'model lineage'?
   - Monitoring server uptime
   - Tracking datasets, code, and parameters that produced a model
   - Versioning HTML files
   - Measuring network latency

20. What is the role of Kubernetes in MLOps pipelines?
   - Data preprocessing
   - Model evaluation
   - Hyperparameter tuning only
   - Scaling and orchestrating ML workloads in production

21. Which stage in MLOps involves hyperparameter tuning?
   - Incident management
   - Monitoring
   - Model training & optimization
   - Deployment

22. Why is explainability important in production ML models?
   - To understand model decisions and build trust with stakeholders
   - To reduce deployment frequency
   - To increase data size
   - To reduce CI/CD runtime

23. Which of the following is an example of predictive analytics in AIOps?
   - Static capacity planning
   - Forecasting disk failures before they occur
   - Manual root cause analysis
   - Real-time log streaming

24. Why is monitoring critical after model deployment?
   - To detect performance degradation and drift
   - To reduce hardware costs
   - To speed up CI builds only
   - To reduce developer workload

25. What is the role of continuous validation in MLOps?
   - Tracks Git commits
   - Reduces network traffic
   - Improves GPU performance
   - Ensures deployed models remain accurate and reliable with new data

26. What is a common challenge in automating ML pipelines?
   - Data versioning and reproducibility
   - Automating UI testing
   - Writing HTML code
   - Cloud billing alerts

27. Which of the following ensures reproducibility in ML experiments?
   - Skipping documentation
   - Manual hyperparameter tuning only
   - Avoiding CI/CD
   - Versioning code, data, and models

28. Which of the following describes Continuous Training (CT) in MLOps?
   - Running unit tests for ML code
   - Re-training models regularly with new data
   - Deploying models continuously without validation
   - Scaling infrastructure on demand

29. What is the main purpose of MLOps?
   - To replace software engineering practices
   - To integrate ML models into production through CI/CD pipelines
   - To build web applications
   - To automate cloud billing processes

30. Which tool is widely used for managing ML pipelines?
   - Kubeflow
   - Jenkins
   - Terraform
   - Nagios

31. Which is a key output of anomaly detection in AIOps?
   - Application code coverage
   - Optimized hyperparameters
   - Identified unusual events that may indicate system issues
   - CI/CD deployment reports
32. What role does Natural Language Processing (NLP) play in AIOps?
   - Provisioning infrastructure
   - Creating CI/CD pipelines
   - Training computer vision models
   - Parsing log files and correlating incidents

33. Which of the following is a common model deployment pattern?
   - Static Scaling
   - Round-Robin Compilation
   - Git Rebase Deployment
   - Blue-Green Deployment

34. Which AI technique is commonly used in AIOps for anomaly detection?
   - Manual log parsing
   - Linear regression only
   - Clustering algorithms
   - Rule-based filtering

35. Which algorithm is often used in AIOps for log anomaly detection?
   - LSTM (Long Short-Term Memory) networks
   - Decision Trees for UI
   - Naive Bayes only
   - Static Regex Matching

36. Which monitoring metric is MOST relevant in MLOps?
   - Website traffic
   - CPU utilization only
   - Model accuracy and drift detection
   - Number of Git commits

37. Which of the following tools is commonly associated with AIOps?
   - Apache Spark
   - Moogsoft
   - Terraform
   - Kubernetes

38. What is the main role of Docker in MLOps pipelines?
   - To perform hyperparameter tuning
   - To analyze log anomalies
   - To act as a monitoring dashboard
   - To containerize ML models for consistent deployment

39. What is a key advantage of using AIOps in incident management?
   - Proactive anomaly detection and root cause analysis
   - Increased number of false alerts
   - Manual intervention for faster resolutions
   - Replacing monitoring tools entirely

40. In a CI/CD pipeline, unit tests for ML models typically validate:
   - User interface design
   - Operating system drivers
   - Data preprocessing and feature transformations
   - Network bandwidth

41. What is Canary Deployment in MLOps?
   - Deploying models only in staging
   - Deploying multiple models in parallel permanently
   - Gradually rolling out a model to a subset of users before full release
   - Deploying models without validation

42. Which of the following best describes the goal of AIOps?
   - Replacing DevOps entirely
   - Applying AI/ML techniques to IT operations for proactive issue detection
   - Automating CI/CD pipelines without monitoring
   - Automating infrastructure scaling only

43. Which of the following ensures fairness and bias detection in ML models?
   - Using random data
   - Skipping validation
   - Responsible AI practices and monitoring
   - Relying on accuracy only

44. Which orchestrator is commonly used for ML pipelines in Kubernetes?
   - Splunk
   - Kubeflow Pipelines
   - Airflow only
   - Nagios

45. Which of the following is NOT a stage in the MLOps lifecycle?
   - Model deployment
   - Model monitoring
   - Model training
   - Model destruction

46. Which tool is commonly used for workflow orchestration in ML pipelines?
   - Excel
   - Nagios
   - Apache Airflow
   - Jenkins only

47. What is the difference between DevOps and MLOps?
   - MLOps replaces DevOps entirely
   - MLOps is only about data collection
   - DevOps is only for cloud computing
   - DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring

48. Which of the following best describes model governance?
   - Anomaly detection only
   - Hyperparameter optimization
   - Processes ensuring compliance, auditability, and security in ML models
   - Visualization dashboards

49. Which of the following tools integrates monitoring into MLOps pipelines?
   - Slack
   - PowerPoint
   - Prometheus & Grafana
   - Tableau only
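Several questions in the test touch on data drift detection (model drift, drift monitoring after deployment, drift as a key MLOps metric). As a hedged illustration of how such a check can work in practice, here is a minimal sketch of one common technique, the Population Stability Index (PSI), computed over binned feature values. The function name, bin count, and the 0.2 alert threshold mentioned in the comments are illustrative conventions, not part of the quiz material or any specific tool's API.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two samples of a numeric feature.

    Bins are derived from the expected (training) distribution; a small
    epsilon keeps empty bins from causing division-by-zero. PSI > 0.2 is
    a common (but arbitrary) rule of thumb for significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    edges = [lo + i * width for i in range(bins + 1)]
    edges[0] = float("-inf")   # catch serving values below the training min
    edges[-1] = float("inf")   # ...and above the training max

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        eps = 1e-6
        return [max(c / len(sample), eps) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]      # training distribution
same = [random.gauss(0, 1) for _ in range(5000)]       # serving data, no drift
shifted = [random.gauss(1.5, 1) for _ in range(5000)]  # serving data, mean shift

print(f"PSI (no drift):   {psi(train, same):.3f}")
print(f"PSI (mean shift): {psi(train, shifted):.3f}")
```

In a production pipeline this kind of statistic would typically be computed per feature on a schedule and exported to a monitoring stack (e.g., Prometheus/Grafana, as in question 49) so that drift past the chosen threshold can trigger retraining or rollback.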