AI/ML Screening Test
By Vikas Sharma / 19/08/2025

1 / 49. Which of the following ensures fairness and bias detection in ML models?
- Skipping validation
- Responsible AI practices and monitoring
- Relying on accuracy only
- Using random data

2 / 49. Which of the following tools integrates monitoring into MLOps pipelines?
- Prometheus & Grafana
- PowerPoint
- Slack
- Tableau only

3 / 49. In MLOps, what is 'model drift'?
- When models crash during deployment
- When the model is moved between servers
- When hyperparameters remain constant
- When model performance degrades due to changes in data patterns

4 / 49. What is the main role of Docker in MLOps pipelines?
- To analyze log anomalies
- To act as a monitoring dashboard
- To containerize ML models for consistent deployment
- To perform hyperparameter tuning

5 / 49. Which type of data is MOST commonly analyzed by AIOps platforms?
- Structured business data
- Unstructured IT operations data like logs, metrics, and traces
- Customer satisfaction surveys
- Video and image datasets

6 / 49. Which AI technique is commonly used in AIOps for anomaly detection?
- Linear regression only
- Manual log parsing
- Rule-based filtering
- Clustering algorithms

7 / 49. What role does Natural Language Processing (NLP) play in AIOps?
- Creating CI/CD pipelines
- Parsing log files and correlating incidents
- Provisioning infrastructure
- Training computer vision models

8 / 49. What is the purpose of data drift detection?
- To optimize CI/CD runtime
- To detect server failures
- To identify changes in input data distribution affecting model performance
- To version-control datasets

9 / 49. Why is monitoring critical after model deployment?
- To detect performance degradation and drift
- To speed up CI builds only
- To reduce developer workload
- To reduce hardware costs

10 / 49. What is the difference between DevOps and MLOps?
- MLOps replaces DevOps entirely
- DevOps focuses on CI/CD for software, while MLOps extends it to ML models with added steps like training and monitoring
- DevOps is only for cloud computing
- MLOps is only about data collection

11 / 49. Which of the following best describes the goal of AIOps?
- Automating infrastructure scaling only
- Automating CI/CD pipelines without monitoring
- Applying AI/ML techniques to IT operations for proactive issue detection
- Replacing DevOps entirely

12 / 49. Why is explainability important in production ML models?
- To reduce CI/CD runtime
- To reduce deployment frequency
- To understand model decisions and build trust with stakeholders
- To increase data size

13 / 49. Which of the following is an example of predictive analytics in AIOps?
- Manual root cause analysis
- Real-time log streaming
- Forecasting disk failures before they occur
- Static capacity planning

14 / 49. Which of the following best describes model governance?
- Processes ensuring compliance, auditability, and security in ML models
- Visualization dashboards
- Hyperparameter optimization
- Anomaly detection only

15 / 49. Which challenge does AIOps primarily address?
- Lack of cloud cost optimization
- Limited access to GitHub repositories
- Manual analysis of large-scale operational data
- Inability to run unit tests

16 / 49. Which orchestrator is commonly used for ML pipelines in Kubernetes?
- Airflow only
- Nagios
- Splunk
- Kubeflow Pipelines

17 / 49. Which algorithm is often used in AIOps for log anomaly detection?
- Naive Bayes only
- Static Regex Matching
- LSTM (Long Short-Term Memory) networks
- Decision Trees for UI

18 / 49. Which of the following is a common model deployment pattern?
- Round-Robin Compilation
- Static Scaling
- Blue-Green Deployment
- Git Rebase Deployment

19 / 49. What is the purpose of MLflow in MLOps?
- Database sharding
- Container orchestration
- Experiment tracking, model registry, and deployment
- Log analysis

20 / 49. What is online learning in ML deployment?
- Updating the model incrementally with streaming data
- Deploying only during office hours
- Batch scoring only
- Offline retraining every month

21 / 49. Which is a key output of anomaly detection in AIOps?
- Application code coverage
- Identified unusual events that may indicate system issues
- Optimized hyperparameters
- CI/CD deployment reports

22 / 49. Which of the following is NOT a stage in the MLOps lifecycle?
- Model monitoring
- Model destruction
- Model deployment
- Model training

23 / 49. How does AIOps reduce 'alert fatigue'?
- By generating more alerts
- By disabling monitoring tools
- By automating deployments only
- By correlating events and suppressing noise

24 / 49. Which CI/CD tool is widely integrated with MLOps pipelines?
- Jenkins
- MS Word
- Final Cut Pro
- Photoshop

25 / 49. Which of the following is an example of CI/CD for ML models?
- Manual model validation
- Skipping version control
- Running experiments locally only
- Automating retraining, testing, and deployment of models

26 / 49. Which of the following tools is commonly associated with AIOps?
- Terraform
- Kubernetes
- Moogsoft
- Apache Spark

27 / 49. What is the main purpose of MLOps?
- To build web applications
- To replace software engineering practices
- To automate cloud billing processes
- To integrate ML models into production through CI/CD pipelines

28 / 49. What is the role of Kubernetes in MLOps pipelines?
- Hyperparameter tuning only
- Model evaluation
- Data preprocessing
- Scaling and orchestrating ML workloads in production

29 / 49. What is blue-green deployment in ML pipelines?
- Running models on GPUs only
- Maintaining two identical environments (blue and green) to switch traffic safely during updates
- Splitting training datasets randomly
- Using two ML algorithms simultaneously

30 / 49. What is 'model rollback' in CI/CD pipelines?
- Resetting hyperparameters
- Restarting the server
- Reverting to a previous stable model when the new one fails
- Re-training from scratch

31 / 49. Which tool is widely used for managing ML pipelines?
- Jenkins
- Terraform
- Kubeflow
- Nagios

32 / 49. What is Canary Deployment in MLOps?
- Gradually rolling out a model to a subset of users before full release
- Deploying multiple models in parallel permanently
- Deploying models only in staging
- Deploying models without validation

33 / 49. Which of the following ensures reproducibility in ML experiments?
- Avoiding CI/CD
- Versioning code, data, and models
- Manual hyperparameter tuning only
- Skipping documentation

34 / 49. Which stage in MLOps involves hyperparameter tuning?
- Model training & optimization
- Deployment
- Incident management
- Monitoring

35 / 49. Which cloud service provides a fully managed ML pipeline solution?
- Kubernetes without ML
- Photoshop Cloud
- AWS SageMaker Pipelines
- VMware vSphere

36 / 49. What does CI/CD integration with a model registry achieve?
- Tracks GitHub issues only
- Automates promotion of validated models to production
- Simplifies HTML rendering
- Improves IDE performance

37 / 49. In MLOps, what is 'model lineage'?
- Measuring network latency
- Monitoring server uptime
- Tracking datasets, code, and parameters that produced a model
- Versioning HTML files

38 / 49. What is shadow deployment in MLOps?
- Deploying on shadow servers only
- Deploying without monitoring
- Deploying only half the model
- Running a new model in parallel with the current one without serving predictions to users

39 / 49. What is the role of GitOps in MLOps?
- Managing ML infrastructure and deployments declaratively through Git
- Running hyperparameter optimization
- Visualizing anomalies
- Training ML models

40 / 49. In a CI/CD pipeline, unit tests for ML models typically validate:
- Network bandwidth
- User interface design
- Data preprocessing and feature transformations
- Operating system drivers

41 / 49. What does a feature store provide in MLOps?
- A monitoring dashboard
- A code versioning platform
- A CI/CD orchestrator
- A centralized repository for storing and sharing ML features

42 / 49. What is a key advantage of using AIOps in incident management?
- Manual intervention for faster resolutions
- Replacing monitoring tools entirely
- Increased number of false alerts
- Proactive anomaly detection and root cause analysis

43 / 49. What is the role of continuous validation in MLOps?
- Tracks Git commits
- Reduces network traffic
- Ensures deployed models remain accurate and reliable with new data
- Improves GPU performance

44 / 49. Which metric is best for evaluating classification models on an imbalanced dataset?
- Accuracy only
- Precision-Recall AUC
- CPU usage
- Mean Squared Error

45 / 49. Which tool is commonly used for workflow orchestration in ML pipelines?
- Excel
- Apache Airflow
- Jenkins only
- Nagios

46 / 49. Which of the following describes Continuous Training (CT) in MLOps?
- Running unit tests for ML code
- Scaling infrastructure on demand
- Deploying models continuously without validation
- Re-training models regularly with new data

47 / 49. What is a common challenge in automating ML pipelines?
- Writing HTML code
- Automating UI testing
- Cloud billing alerts
- Data versioning and reproducibility

48 / 49. Which monitoring metric is MOST relevant in MLOps?
- CPU utilization only
- Website traffic
- Model accuracy and drift detection
- Number of Git commits

49 / 49. What is the purpose of a model registry in MLOps?
- To store cloud infrastructure templates
- To track CI/CD pipeline executions
- To store, version, and manage trained ML models
- To manage Kubernetes clusters
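Several of the concepts tested above lend themselves to short sketches. Question 8's data drift detection, for example: the toy single-feature check below flags drift when a live window's mean moves more than a few reference standard deviations. The function name and the 3-sigma threshold are illustrative assumptions, not a production detector; real systems apply statistical tests (e.g. Kolmogorov-Smirnov or PSI) across many features.

```python
import statistics

def drift_score(reference, live, threshold=3.0):
    """Toy drift check: flag drift when the live window's mean shifts by
    more than `threshold` reference standard deviations. Illustrative only."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.pstdev(reference) or 1.0  # guard against zero std
    shift = abs(statistics.mean(live) - ref_mean) / ref_std
    return shift > threshold, shift

# Same distribution: no drift flagged; shifted mean: drift flagged.
drifted_a, _ = drift_score([10, 11, 9, 10, 10], [10, 10, 11, 9, 10])
drifted_b, _ = drift_score([10, 11, 9, 10, 10], [14, 15, 13, 14, 15])
```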
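Blue-green deployment (questions 18 and 29) keeps two identical environments and flips traffic between them atomically, so a bad release can be undone by flipping back. A minimal routing sketch; the class and method names are hypothetical:

```python
class BlueGreenRouter:
    """Toy sketch of blue-green routing: two environments, one live at a time."""

    def __init__(self, blue_model, green_model):
        self.envs = {"blue": blue_model, "green": green_model}
        self.live = "blue"  # blue serves traffic initially

    def predict(self, x):
        # All traffic goes to whichever environment is currently live.
        return self.envs[self.live](x)

    def switch(self):
        # Atomic flip; flipping again is the rollback path.
        self.live = "green" if self.live == "blue" else "blue"

router = BlueGreenRouter(lambda x: "v1", lambda x: "v2")
router.predict(None)  # served by blue: "v1"
router.switch()       # validated green environment takes over
router.predict(None)  # served by green: "v2"
```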
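Online learning (question 20) means updating a model incrementally as data streams in, rather than retraining offline in batch. A toy per-sample gradient update on a one-feature linear model, with illustrative names throughout:

```python
def online_update(stream, lr=0.05):
    """Toy online learner: linear model y = w*x + b, updated one
    (x, y) sample at a time instead of batch retraining."""
    w, b = 0.0, 0.0
    for x, y in stream:
        err = (w * x + b) - y  # prediction error on this sample
        w -= lr * err * x      # per-sample gradient step on the weight
        b -= lr * err          # per-sample gradient step on the bias
    return w, b

# A stream drawn from y = 2x; the model tracks it incrementally.
w, b = online_update([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 300)
```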
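Canary deployment (question 32) sends a small fixed fraction of traffic to the new model before full rollout. One common mechanism is a deterministic hash-based split, sketched here with illustrative names:

```python
import zlib

def route(request_id: str, canary_pct: int = 10) -> str:
    """Deterministic traffic split: hash the request/user id into one of
    100 buckets and send the lowest `canary_pct` buckets to the canary.
    The same id always lands on the same side, keeping sessions sticky."""
    bucket = zlib.crc32(request_id.encode()) % 100
    return "canary" if bucket < canary_pct else "stable"

hits = [route(f"user-{i}") for i in range(10_000)]
canary_share = hits.count("canary") / len(hits)  # roughly 0.10
```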
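Question 44's point, that accuracy misleads on imbalanced data, is easy to demonstrate from raw confusion counts: a classifier that predicts "all negative" on a 95/5 split scores 95% accuracy yet has zero recall, which precision and recall expose immediately.

```python
def precision_recall(y_true, y_pred):
    """Precision and recall from raw labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# 95% negatives: predicting all-negative gives 95% accuracy
# but catches no positives at all.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100
p, r = precision_recall(y_true, y_pred)  # (0.0, 0.0)
```

Sweeping the decision threshold and plotting these two values against each other yields the precision-recall curve whose area is the PR-AUC named in the question.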
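Finally, question 49's model registry (store, version, and manage trained models) can be sketched as a minimal in-memory version store. This is a toy with hypothetical names; real registries such as MLflow's add stages, metadata, access control, and storage backends.

```python
class ModelRegistry:
    """Toy in-memory registry: trained models are versioned per name."""

    def __init__(self):
        self._models = {}

    def register(self, name, model):
        # Append as a new version and return its 1-based version number.
        versions = self._models.setdefault(name, [])
        versions.append(model)
        return len(versions)

    def get(self, name, version=None):
        # No version requested -> latest; otherwise a specific version.
        versions = self._models[name]
        return versions[-1] if version is None else versions[version - 1]

registry = ModelRegistry()
v1 = registry.register("churn", {"weights": [0.1]})
v2 = registry.register("churn", {"weights": [0.2]})
# v1 == 1, v2 == 2; get() without a version returns the latest model
```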