AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1 / 49. How does AIOps reduce 'alert fatigue'?
- By disabling monitoring tools
- By generating more alerts
- By automating deployments only
- By correlating events and suppressing noise

2 / 49. Which challenge does AIOps primarily address?
- Lack of cloud cost optimization
- Limited access to GitHub repositories
- Manual analysis of large-scale operational data
- Inability to run unit tests

3 / 49. What is a key advantage of using AIOps in incident management?
- Manual intervention for faster resolutions
- Increased number of false alerts
- Proactive anomaly detection and root cause analysis
- Replacing monitoring tools entirely

4 / 49. What is the purpose of MLflow in MLOps?
- Database sharding
- Experiment tracking, model registry, and deployment
- Log analysis
- Container orchestration

5 / 49. Which tool is widely used for managing ML pipelines?
- Kubeflow
- Nagios
- Terraform
- Jenkins

6 / 49. Which of the following tools is commonly associated with AIOps?
- Terraform
- Moogsoft
- Kubernetes
- Apache Spark

7 / 49. Which AI technique is commonly used in AIOps for anomaly detection?
- Clustering algorithms
- Manual log parsing
- Linear regression only
- Rule-based filtering

8 / 49. Which orchestrator is commonly used for ML pipelines in Kubernetes?
- Splunk
- Kubeflow Pipelines
- Nagios
- Airflow only

9 / 49. Which is a key output of anomaly detection in AIOps?
- Identified unusual events that may indicate system issues
- CI/CD deployment reports
- Application code coverage
- Optimized hyperparameters

10 / 49. What is shadow deployment in MLOps?
- Deploying on shadow servers only
- Deploying only half the model
- Running a new model in parallel with the current one without serving predictions to users
- Deploying without monitoring

11 / 49. Which CI/CD tool is widely integrated with MLOps pipelines?
- MS Word
- Photoshop
- Final Cut Pro
- Jenkins

12 / 49. Which tool is commonly used for workflow orchestration in ML pipelines?
- Nagios
- Jenkins only
- Apache Airflow
- Excel

13 / 49. Which of the following describes Continuous Training (CT) in MLOps?
- Re-training models regularly with new data
- Deploying models continuously without validation
- Scaling infrastructure on demand
- Running unit tests for ML code

14 / 49. Which of the following is an example of CI/CD for ML models?
- Automating retraining, testing, and deployment of models
- Manual model validation
- Running experiments locally only
- Skipping version control

15 / 49. What is the difference between DevOps and MLOps?
- DevOps is only for cloud computing
- MLOps is only about data collection
- DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
- MLOps replaces DevOps entirely

16 / 49. What is the main role of Docker in MLOps pipelines?
- To perform hyperparameter tuning
- To act as a monitoring dashboard
- To analyze log anomalies
- To containerize ML models for consistent deployment

17 / 49. Which algorithm is often used in AIOps for log anomaly detection?
- Naive Bayes only
- Static Regex Matching
- Decision Trees for UI
- LSTM (Long Short-Term Memory) networks

18 / 49. Which of the following is an example of predictive analytics in AIOps?
- Static capacity planning
- Manual root cause analysis
- Forecasting disk failures before they occur
- Real-time log streaming

19 / 49. In MLOps, what is 'model drift'?
- When models crash during deployment
- When hyperparameters remain constant
- When model performance degrades due to changes in data patterns
- When the model is moved between servers

20 / 49. What is a common challenge in automating ML pipelines?
- Data versioning and reproducibility
- Automating UI testing
- Cloud billing alerts
- Writing HTML code

21 / 49. What is the main purpose of MLOps?
- To automate cloud billing processes
- To replace software engineering practices
- To integrate ML models into production through CI/CD pipelines
- To build web applications

22 / 49. What is blue-green deployment in ML pipelines?
- Running models on GPUs only
- Splitting training datasets randomly
- Using two ML algorithms simultaneously
- Maintaining two identical environments (blue and green) to switch traffic safely during updates

23 / 49. What is the purpose of a model registry in MLOps?
- To manage Kubernetes clusters
- To store, version, and manage trained ML models
- To store cloud infrastructure templates
- To track CI/CD pipeline executions

24 / 49. In MLOps, what is 'model lineage'?
- Tracking datasets, code, and parameters that produced a model
- Versioning HTML files
- Monitoring server uptime
- Measuring network latency

25 / 49. Which of the following tools integrates monitoring into MLOps pipelines?
- PowerPoint
- Slack
- Tableau only
- Prometheus & Grafana

26 / 49. Which of the following ensures reproducibility in ML experiments?
- Manual hyperparameter tuning only
- Versioning code, data, and models
- Avoiding CI/CD
- Skipping documentation

27 / 49. Which stage in MLOps involves hyperparameter tuning?
- Monitoring
- Deployment
- Model training & optimization
- Incident management

28 / 49. In a CI/CD pipeline, unit tests for ML models typically validate:
- Data preprocessing and feature transformations
- Operating system drivers
- Network bandwidth
- User interface design

29 / 49. What is the role of continuous validation in MLOps?
- Tracks Git commits
- Ensures deployed models remain accurate and reliable with new data
- Reduces network traffic
- Improves GPU performance

30 / 49. What does a feature store provide in MLOps?
- A code versioning platform
- A centralized repository for storing and sharing ML features
- A monitoring dashboard
- A CI/CD orchestrator

31 / 49. Which type of data is MOST commonly analyzed by AIOps platforms?
- Video and image datasets
- Structured business data
- Customer satisfaction surveys
- Unstructured IT operations data like logs, metrics, and traces

32 / 49. Which metric is best for evaluating classification models on an imbalanced dataset?
- Mean Squared Error
- CPU usage
- Accuracy only
- Precision-Recall AUC

33 / 49. Why is explainability important in production ML models?
- To reduce deployment frequency
- To reduce CI/CD runtime
- To understand model decisions and build trust with stakeholders
- To increase data size

34 / 49. Which of the following is NOT a stage in the MLOps lifecycle?
- Model destruction
- Model deployment
- Model training
- Model monitoring

35 / 49. Which monitoring metric is MOST relevant in MLOps?
- Model accuracy and drift detection
- Number of Git commits
- CPU utilization only
- Website traffic

36 / 49. Which cloud service provides a fully managed ML pipeline solution?
- AWS SageMaker Pipelines
- VMware vSphere
- Photoshop Cloud
- Kubernetes without ML

37 / 49. What is Canary Deployment in MLOps?
- Deploying models without validation
- Gradually rolling out a model to a subset of users before full release
- Deploying models only in staging
- Deploying multiple models in parallel permanently

38 / 49. Why is monitoring critical after model deployment?
- To speed up CI builds only
- To reduce developer workload
- To reduce hardware costs
- To detect performance degradation and drift

39 / 49. Which of the following ensures fairness and bias detection in ML models?
- Using random data
- Skipping validation
- Relying on accuracy only
- Responsible AI practices and monitoring

40 / 49. What does CI/CD integration with a model registry achieve?
- Automates promotion of validated models to production
- Tracks GitHub issues only
- Improves IDE performance
- Simplifies HTML rendering

41 / 49. What is the role of GitOps in MLOps?
- Running hyperparameter optimization
- Managing ML infrastructure and deployments declaratively through Git
- Visualizing anomalies
- Training ML models

42 / 49. Which of the following is a common model deployment pattern?
- Round-Robin Compilation
- Blue-Green Deployment
- Static Scaling
- Git Rebase Deployment

43 / 49. What role does Natural Language Processing (NLP) play in AIOps?
- Creating CI/CD pipelines
- Parsing log files and correlating incidents
- Training computer vision models
- Provisioning infrastructure

44 / 49. Which of the following best describes the goal of AIOps?
- Applying AI/ML techniques to IT operations for proactive issue detection
- Replacing DevOps entirely
- Automating infrastructure scaling only
- Automating CI/CD pipelines without monitoring

45 / 49. What is the role of Kubernetes in MLOps pipelines?
- Model evaluation
- Scaling and orchestrating ML workloads in production
- Hyperparameter tuning only
- Data preprocessing

46 / 49. What is 'model rollback' in CI/CD pipelines?
- Restarting the server
- Resetting hyperparameters
- Re-training from scratch
- Reverting to a previous stable model when the new one fails

47 / 49. Which of the following best describes model governance?
- Processes ensuring compliance, auditability, and security in ML models
- Anomaly detection only
- Hyperparameter optimization
- Visualization dashboards

48 / 49. What is online learning in ML deployment?
- Batch scoring only
- Offline retraining every month
- Deploying only during office hours
- Updating the model incrementally with streaming data

49 / 49. What is the purpose of data drift detection?
- To version-control datasets
- To identify changes in input data distribution affecting model performance
- To optimize CI/CD runtime
- To detect server failures
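The noise-suppression idea behind question 1, correlating related alerts so operators see one incident instead of many, can be sketched in a few lines of Python. The function name, the (timestamp, service, symptom) alert shape, and the 60-second window are illustrative assumptions, not part of the quiz:

```python
from collections import defaultdict

def correlate_alerts(alerts, window_s=60):
    """Group raw alerts by (service, symptom) within a coarse time
    window and emit one correlated incident per group, suppressing
    duplicate noise. Toy sketch; real AIOps platforms use richer
    clustering and topology-aware correlation."""
    groups = defaultdict(list)
    for ts, service, symptom in alerts:
        key = (int(ts // window_s), service, symptom)
        groups[key].append((ts, service, symptom))
    # One representative incident per group, annotated with alert count.
    return [{"service": s, "symptom": m, "count": len(v)}
            for (_, s, m), v in sorted(groups.items())]

raw = [(1, "db", "high_latency"), (5, "db", "high_latency"),
       (30, "db", "high_latency"), (70, "api", "5xx")]
incidents = correlate_alerts(raw)  # 4 alerts collapse into 2 incidents
```

Three database-latency alerts inside one window collapse into a single incident, which is the "correlating events and suppressing noise" behavior the question names.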
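Model drift as tested in question 19, performance degrading when data patterns change, is often operationalized as a simple threshold check on live accuracy windows. A minimal sketch, where the 5% tolerance and the daily accuracy numbers are made up for illustration:

```python
def detect_drift(baseline_acc, live_accs, tolerance=0.05):
    """Flag evaluation windows where live accuracy falls below the
    baseline by more than `tolerance` (a hypothetical threshold).
    Returns the indices of drifted windows."""
    return [i for i, acc in enumerate(live_accs)
            if baseline_acc - acc > tolerance]

# Accuracy per daily window after deployment (illustrative numbers).
windows = [0.91, 0.90, 0.88, 0.82, 0.79]
drifted = detect_drift(0.92, windows)  # windows 3 and 4 degrade past 5%
```

In practice such a check runs inside the monitoring stage (question 38) and can trigger the continuous-training loop from question 13.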
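Question 23 describes a model registry as a place to store, version, and manage trained models. A toy in-memory sketch of that idea; real registries such as MLflow's add persistent storage, stages, and lineage, and the class and method names here are invented for illustration:

```python
class ModelRegistry:
    """Toy in-memory model registry: versioned artifacts per model
    name, plus a pointer to the version promoted to production."""
    def __init__(self):
        self._models = {}      # name -> {version: artifact}
        self._production = {}  # name -> promoted version

    def register(self, name, artifact):
        versions = self._models.setdefault(name, {})
        version = len(versions) + 1  # auto-incrementing version number
        versions[version] = artifact
        return version

    def promote(self, name, version):
        if version not in self._models.get(name, {}):
            raise ValueError(f"unknown version {version} for {name}")
        self._production[name] = version

    def production_model(self, name):
        return self._models[name][self._production[name]]

reg = ModelRegistry()
v1 = reg.register("churn", "weights-v1.bin")
v2 = reg.register("churn", "weights-v2.bin")
reg.promote("churn", v2)  # serving layer now resolves to v2
```

The `promote` step is what CI/CD integration automates in question 40: a validated version is moved to production, and rollback (question 46) is just promoting the previous version again.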
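Question 32 points at why plain accuracy misleads on imbalanced data. A short sketch computing precision and recall by hand on a hypothetical dataset with one positive in ten samples, where a model that predicts all zeros still scores 90% accuracy:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall from binary labels; unlike accuracy,
    these stay informative when positives are rare."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [0] * 9 + [1]     # 1 positive among 10 samples
all_zero = [0] * 10        # degenerate "always negative" predictor
p, r = precision_recall(y_true, all_zero)  # both 0.0 despite 90% accuracy
```

Sweeping a classifier's decision threshold and plotting these two numbers gives the Precision-Recall curve whose area (PR AUC) the question names as the better metric.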
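Data drift detection (question 49) is commonly quantified with the Population Stability Index, which compares the binned distribution of live inputs against the training distribution. A minimal sketch, where the bin count, the 1e-4 floor for empty bins, and the 0.2 "significant drift" threshold are conventional heuristics rather than quiz material:

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between two numeric samples.
    Values near 0 mean similar distributions; > 0.2 is a common
    (heuristic) signal of significant input drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train = [0.1 * i for i in range(100)]          # training distribution
shifted = [5.0 + 0.1 * i for i in range(100)]  # live inputs shifted upward
```

Here `psi(train, train)` is zero while `psi(train, shifted)` is large, which is exactly the "change in input data distribution" the question's correct option describes, and it fires before label-based accuracy checks can.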