Optimized Approach for Finding the Largest Element in an Array

Approach:
1) Initialize a variable named "largest" with the first element of the array.
2) Compare "largest" with every remaining element of the array.
3) If an element arr[i] is greater than "largest", assign arr[i] to "largest".
4) Print "largest", which is the largest element in the array.

Program:

#include <iostream>
#include <vector>
using namespace std;

int main() {
    int n;
    cin >> n;                        // number of elements (assumed n >= 1)
    vector<int> arr(n);
    for (int i = 0; i < n; i++) {
        cin >> arr[i];
    }

    int largest = arr[0];            // start with the first element
    for (int i = 1; i < n; i++) {
        if (arr[i] > largest) {      // found a bigger element, remember it
            largest = arr[i];
        }
    }

    cout << "Largest Element in an array is:" << largest;
    return 0;
}

Output:
Largest Element in an array is:<largest value in the input>

Time Complexity: O(N)
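As a side note (not part of the original program), the same single O(N) left-to-right scan is available in the C++ standard library as std::max_element from <algorithm>, which returns an iterator to the largest element. A minimal sketch, again assuming n >= 1:

#include <algorithm>   // std::max_element
#include <iostream>
#include <vector>
using namespace std;

int main() {
    int n;
    cin >> n;
    vector<int> arr(n);
    for (int i = 0; i < n; i++) {
        cin >> arr[i];
    }

    // std::max_element scans the range once and returns an iterator
    // to the largest element; dereference it to get the value.
    int largest = *max_element(arr.begin(), arr.end());
    cout << "Largest Element in an array is:" << largest;
    return 0;
}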
What is dimensionality reduction?

Dimensionality reduction is an unsupervised machine learning technique that helps us reduce a large number of columns to a small number of columns. For example, suppose we have a dataset that consists of 500 columns (C1, C2, C3, ..., C500) and a class variable.
1) If a dataset consists of 500, 1000, or even more columns, i.e. a huge amount of data, fitting it with any machine learning algorithm takes a long time to process. Dimensionality reduction is a way to shrink this data into a smaller number of columns, e.g. 4 (or any other chosen number).
2) The reduced data is a new dataset that consists of a smaller number of dimensions (columns).
3) Machine learning algorithms can then be fit on this reduced data.
4) The process of reducing high-dimensional data into low-dimensional data varies from one technique to another (see the sketch below for one such technique).

Disadvantages of High dimensionality ...
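To make the reduction step described above concrete, here is a minimal sketch (not part of the original text) of one simple dimensionality reduction technique, random projection: each new column is a random linear combination of the original columns, so a table with many columns is mapped to one with only a few. The toy sizes used here (6 rows, 5 original columns, 2 reduced columns) and the randomly generated data are assumptions chosen only to keep the example small.

#include <iostream>
#include <random>
#include <vector>
using namespace std;

int main() {
    const int rows = 6, oldCols = 5, newCols = 2;   // toy sizes for illustration

    // Toy "high-dimensional" dataset: rows x oldCols, filled with random values.
    mt19937 gen(42);
    uniform_real_distribution<double> dist(0.0, 1.0);
    vector<vector<double>> data(rows, vector<double>(oldCols));
    for (auto &row : data)
        for (auto &x : row)
            x = dist(gen);

    // Random projection matrix: oldCols x newCols, with Gaussian entries.
    normal_distribution<double> gauss(0.0, 1.0);
    vector<vector<double>> proj(oldCols, vector<double>(newCols));
    for (auto &row : proj)
        for (auto &x : row)
            x = gauss(gen);

    // Reduced dataset = data * proj  (rows x newCols):
    // every new column is a linear combination of all original columns.
    vector<vector<double>> reduced(rows, vector<double>(newCols, 0.0));
    for (int i = 0; i < rows; i++)
        for (int k = 0; k < newCols; k++)
            for (int j = 0; j < oldCols; j++)
                reduced[i][k] += data[i][j] * proj[j][k];

    // The reduced data keeps the same number of rows but has fewer columns,
    // and this is what would be fed to a machine learning algorithm.
    for (int i = 0; i < rows; i++) {
        for (int k = 0; k < newCols; k++)
            cout << reduced[i][k] << " ";
        cout << "\n";
    }
    return 0;
}

Other techniques (for example PCA) choose the projection directions from the data itself rather than at random, which is one way the reduction process varies from technique to technique.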