AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1 / 49. What does CI/CD integration with a model registry achieve?
- Simplifies HTML rendering
- Tracks GitHub issues only
- Automates promotion of validated models to production
- Improves IDE performance

2 / 49. Which AI technique is commonly used in AIOps for anomaly detection?
- Clustering algorithms
- Manual log parsing
- Linear regression only
- Rule-based filtering

3 / 49. What is shadow deployment in MLOps?
- Deploying on shadow servers only
- Deploying without monitoring
- Running a new model in parallel with the current one without serving predictions to users
- Deploying only half the model

4 / 49. Which cloud service provides a fully managed ML pipeline solution?
- VMware vSphere
- AWS SageMaker Pipelines
- Kubernetes without ML
- Photoshop Cloud

5 / 49. How does AIOps reduce 'alert fatigue'?
- By generating more alerts
- By correlating events and suppressing noise
- By automating deployments only
- By disabling monitoring tools

6 / 49. What is canary deployment in MLOps?
- Deploying models without validation
- Deploying models only in staging
- Deploying multiple models in parallel permanently
- Gradually rolling out a model to a subset of users before full release

7 / 49. Which of the following ensures reproducibility in ML experiments?
- Avoiding CI/CD
- Versioning code, data, and models
- Skipping documentation
- Manual hyperparameter tuning only

8 / 49. Which of the following best describes model governance?
- Visualization dashboards
- Processes ensuring compliance, auditability, and security in ML models
- Anomaly detection only
- Hyperparameter optimization

9 / 49. What is the main purpose of MLOps?
- To automate cloud billing processes
- To integrate ML models into production through CI/CD pipelines
- To replace software engineering practices
- To build web applications

10 / 49. Which monitoring metric is MOST relevant in MLOps?
- Website traffic
- Number of Git commits
- Model accuracy and drift detection
- CPU utilization only

11 / 49. Which of the following best describes the goal of AIOps?
- Applying AI/ML techniques to IT operations for proactive issue detection
- Replacing DevOps entirely
- Automating CI/CD pipelines without monitoring
- Automating infrastructure scaling only

12 / 49. What is the difference between DevOps and MLOps?
- DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
- MLOps is only about data collection
- MLOps replaces DevOps entirely
- DevOps is only for cloud computing

13 / 49. Which of the following describes Continuous Training (CT) in MLOps?
- Scaling infrastructure on demand
- Running unit tests for ML code
- Re-training models regularly with new data
- Deploying models continuously without validation

14 / 49. What is the role of continuous validation in MLOps?
- Improves GPU performance
- Tracks Git commits
- Ensures deployed models remain accurate and reliable with new data
- Reduces network traffic

15 / 49. What is the purpose of data drift detection?
- To identify changes in input data distribution affecting model performance
- To detect server failures
- To version-control datasets
- To optimize CI/CD runtime

16 / 49. Which algorithm is often used in AIOps for log anomaly detection?
- Naive Bayes only
- LSTM (Long Short-Term Memory) networks
- Static regex matching
- Decision trees for UI

17 / 49. Which CI/CD tool is widely integrated with MLOps pipelines?
- Photoshop
- Jenkins
- Final Cut Pro
- MS Word

18 / 49. Which of the following tools is commonly associated with AIOps?
- Moogsoft
- Apache Spark
- Terraform
- Kubernetes

19 / 49. Why is monitoring critical after model deployment?
- To reduce developer workload
- To reduce hardware costs
- To speed up CI builds only
- To detect performance degradation and drift

20 / 49. Which of the following is NOT a stage in the MLOps lifecycle?
- Model monitoring
- Model deployment
- Model destruction
- Model training

21 / 49. What is the purpose of a model registry in MLOps?
- To manage Kubernetes clusters
- To store cloud infrastructure templates
- To track CI/CD pipeline executions
- To store, version, and manage trained ML models

22 / 49. What is a key advantage of using AIOps in incident management?
- Manual intervention for faster resolutions
- Increased number of false alerts
- Proactive anomaly detection and root cause analysis
- Replacing monitoring tools entirely

23 / 49. What is the role of GitOps in MLOps?
- Running hyperparameter optimization
- Training ML models
- Visualizing anomalies
- Managing ML infrastructure and deployments declaratively through Git

24 / 49. Which of the following is an example of CI/CD for ML models?
- Skipping version control
- Automating retraining, testing, and deployment of models
- Manual model validation
- Running experiments locally only

25 / 49. In a CI/CD pipeline, unit tests for ML models typically validate:
- User interface design
- Data preprocessing and feature transformations
- Network bandwidth
- Operating system drivers

26 / 49. Which orchestrator is commonly used for ML pipelines in Kubernetes?
- Airflow only
- Nagios
- Splunk
- Kubeflow Pipelines

27 / 49. What is the role of Kubernetes in MLOps pipelines?
- Scaling and orchestrating ML workloads in production
- Model evaluation
- Data preprocessing
- Hyperparameter tuning only

28 / 49. What is blue-green deployment in ML pipelines?
- Maintaining two identical environments (blue and green) to switch traffic safely during updates
- Running models on GPUs only
- Using two ML algorithms simultaneously
- Splitting training datasets randomly

29 / 49. Which metric is best for evaluating classification models on imbalanced datasets?
- CPU usage
- Precision-Recall AUC
- Accuracy only
- Mean Squared Error

30 / 49. Which challenge does AIOps primarily address?
- Lack of cloud cost optimization
- Manual analysis of large-scale operational data
- Limited access to GitHub repositories
- Inability to run unit tests

31 / 49. Which type of data is MOST commonly analyzed by AIOps platforms?
- Customer satisfaction surveys
- Unstructured IT operations data like logs, metrics, and traces
- Structured business data
- Video and image datasets

32 / 49. Which of the following is a common model deployment pattern?
- Git Rebase Deployment
- Blue-Green Deployment
- Round-Robin Compilation
- Static Scaling

33 / 49. Which of the following ensures fairness and bias detection in ML models?
- Responsible AI practices and monitoring
- Skipping validation
- Using random data
- Relying on accuracy only

34 / 49. In MLOps, what is 'model lineage'?
- Monitoring server uptime
- Tracking datasets, code, and parameters that produced a model
- Measuring network latency
- Versioning HTML files

35 / 49. What role does Natural Language Processing (NLP) play in AIOps?
- Creating CI/CD pipelines
- Training computer vision models
- Provisioning infrastructure
- Parsing log files and correlating incidents

36 / 49. What is the purpose of MLflow in MLOps?
- Database sharding
- Log analysis
- Experiment tracking, model registry, and deployment
- Container orchestration

37 / 49. Which tool is widely used for managing ML pipelines?
- Nagios
- Kubeflow
- Terraform
- Jenkins

38 / 49. Which stage in MLOps involves hyperparameter tuning?
- Incident management
- Deployment
- Monitoring
- Model training & optimization

39 / 49. Which is a key output of anomaly detection in AIOps?
- Application code coverage
- CI/CD deployment reports
- Identified unusual events that may indicate system issues
- Optimized hyperparameters

40 / 49. Why is explainability important in production ML models?
- To reduce CI/CD runtime
- To reduce deployment frequency
- To increase data size
- To understand model decisions and build trust with stakeholders

41 / 49. What is the main role of Docker in MLOps pipelines?
- To perform hyperparameter tuning
- To act as a monitoring dashboard
- To analyze log anomalies
- To containerize ML models for consistent deployment

42 / 49. What is a common challenge in automating ML pipelines?
- Data versioning and reproducibility
- Writing HTML code
- Cloud billing alerts
- Automating UI testing

43 / 49. What does a feature store provide in MLOps?
- A monitoring dashboard
- A CI/CD orchestrator
- A centralized repository for storing and sharing ML features
- A code versioning platform

44 / 49. What is 'model rollback' in CI/CD pipelines?
- Reverting to a previous stable model when the new one fails
- Restarting the server
- Re-training from scratch
- Resetting hyperparameters

45 / 49. What is online learning in ML deployment?
- Batch scoring only
- Offline retraining every month
- Updating the model incrementally with streaming data
- Deploying only during office hours

46 / 49. Which of the following is an example of predictive analytics in AIOps?
- Real-time log streaming
- Static capacity planning
- Forecasting disk failures before they occur
- Manual root cause analysis

47 / 49. Which tool is commonly used for workflow orchestration in ML pipelines?
- Apache Airflow
- Jenkins only
- Excel
- Nagios

48 / 49. Which of the following tools integrates monitoring into MLOps pipelines?
- Tableau only
- PowerPoint
- Prometheus & Grafana
- Slack

49 / 49. In MLOps, what is 'model drift'?
- When hyperparameters remain constant
- When models crash during deployment
- When model performance degrades due to changes in data patterns
- When the model is moved between servers
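Several of the questions above (the model registry in 21 / 49, automated promotion in 1 / 49, and rollback in 44 / 49) revolve around the same mechanism: a registry that versions trained models and lets a pipeline promote or revert them. A minimal sketch of that idea, in plain Python with illustrative names (`ModelRegistry`, `promote`, `rollback` are not the API of any specific tool such as MLflow):

```python
class ModelRegistry:
    """Toy in-memory model registry: version, promote, and roll back models."""

    def __init__(self):
        self._versions = {}     # version number -> model artifact (any object)
        self._history = []      # stack of versions promoted to production
        self._next_version = 1

    def register(self, model):
        """Store a trained model artifact and return its new version number."""
        version = self._next_version
        self._versions[version] = model
        self._next_version += 1
        return version

    def promote(self, version):
        """Promote a registered version to production, e.g. after CI/CD validation."""
        if version not in self._versions:
            raise KeyError(f"unknown model version: {version}")
        self._history.append(version)

    def rollback(self):
        """Revert to the previously promoted version when the new one fails."""
        if len(self._history) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self._history.pop()        # discard the failing version
        return self._history[-1]   # the now-active stable version

    @property
    def production_version(self):
        return self._history[-1] if self._history else None


registry = ModelRegistry()
v1 = registry.register({"weights": "..."})  # placeholder artifacts; real
v2 = registry.register({"weights": "..."})  # registries store files/URIs
registry.promote(v1)
registry.promote(v2)     # new model goes live
registry.rollback()      # new model misbehaves -> revert
print(registry.production_version)  # -> 1
```

In a real pipeline the promotion step would be triggered by CI/CD only after validation passes, and rollback would be wired to the monitoring alerts the quiz mentions (accuracy degradation, drift).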