From time to time I find myself needing to sort a large array (more than 100,000 items). Unfortunately, the built-in JavaScript Array.sort([compareFunction]) is not the fastest, and it has one major problem for my personal use: when called without a compareFunction, the default implementation assumes each element in the array is a string, so elements are compared character by character using their Unicode code unit values.
var fruitNames = ['avocado', 'blueberry', 'coconut', 'cherry', 'banana', 'orange', 'apple'];
fruitNames.sort();
// result: ["apple", "avocado", "banana", "blueberry", "cherry", "coconut", "orange"]

// without compareFunction, each element is treated as a 'string' (even though it is a number)
var myNumbers = [4,7,9,1,2,5,3,8,6,12,11,10];
myNumbers.sort();
// result: [1, 10, 11, 12, 2, 3, 4, 5, 6, 7, 8, 9]

// we need a custom 'compareFunction' to compare each element as a number value
myNumbers.sort((a, b) => a - b);
// result: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
This default behaviour of treating each element as a string slows down sorting, and my personal use of sorting is mostly for numbers. I searched for and studied a few other JavaScript solutions, and in this blog I will try to optimize the QuickSort algorithm and make it faster than the Array.sort() implementations currently built into Chrome, Firefox and Safari.
Why did I choose to optimize the QuickSort algorithm?
 Famous: many programmers already know it; it is proven and used in the C standard library's qsort().
 In-place: it directly changes/updates the input array, uses little memory and is fast, although unstable.
 Simple: the code is tiny and easy to understand and maintain later.
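To illustrate the 'unstable' point above, here is a small hypothetical example with object data (the names and ages are made up). A stable sort, which the native Array.sort() is required to be since ES2019, keeps equal keys in their original order; an unstable in-place QuickSort makes no such guarantee.

```javascript
// Hypothetical data to show what 'unstable' means:
// Ann and Cid share the same age (equal sort keys).
const people = [
  { name: 'Ann', age: 30 },
  { name: 'Bob', age: 25 },
  { name: 'Cid', age: 30 },
];

// The native Array.sort() is specified to be stable (since ES2019),
// so Ann is guaranteed to stay before Cid after sorting by age.
const byAge = [...people].sort((a, b) => a.age - b.age);
console.log(byAge.map(p => p.name).join(',')); // "Bob,Ann,Cid"

// An unstable in-place QuickSort may legally produce Bob,Cid,Ann instead;
// both orders are 'sorted by age', but the tie order is not preserved.
```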
There are many variants of QuickSort implementations; this blog shows only 2 of them. The original partition scheme is from the creator of QuickSort, Charles Antony Richard Hoare (Tony Hoare), and the newer partition scheme is from Nico Lomuto. I've included Nico Lomuto's method because it is simpler and its logic is easier to understand, so it is here for comparison and learning purposes. Below are the barebone classic QuickSort functions.
function quickSort_by_Tony_Hoare(arr, startIndex, endIndex) {
    // this is the original QuickSort algorithm by Tony Hoare
    if(endIndex > startIndex) {
        var partitionIndex = partition_by_Tony_Hoare(arr, startIndex, endIndex);
        // the item at partitionIndex will be included in the recursive sorting of lower values
        // recursion to sort lower values
        quickSort_by_Tony_Hoare(arr, startIndex, partitionIndex);
        // recursion to sort higher values
        quickSort_by_Tony_Hoare(arr, partitionIndex + 1, endIndex);
    }
    return arr;
}

function partition_by_Tony_Hoare(arr, startIndex, endIndex) {
    // original partitioning scheme by the creator of QuickSort
    // https://en.wikipedia.org/wiki/Tony_Hoare
    // WARNING: if the selected pivot is at the last position (endIndex)
    // AND it is the largest value, then an INFINITE LOOP occurs!
    //var pivot = arr[startIndex];
    var pivot = selectPivot(arr, startIndex, endIndex, false);
    var left = startIndex, right = endIndex;
    do {
        // find a value bigger than the pivot from the bottom
        while(pivot > arr[left]) { left++; }
        // find a value lower than the pivot from the top
        while(arr[right] > pivot) { right--; }
        // are the positions valid for a swap?
        if(left >= right) {
            // not valid, so exit
            return right;
        }
        swap(arr, left, right);
        left++;
        right--;
    } while(true); // loop forever until return
}

function quickSort_by_Nico_Lomuto(arr, startIndex, endIndex) {
    // using the Nico Lomuto partition scheme,
    // simpler and easier to understand
    if(endIndex > startIndex) {
        var partitionIndex = partition_by_Nico_Lomuto(arr, startIndex, endIndex);
        // the item at partitionIndex will not be included in recursive sorting because
        // arr[partitionIndex] >= [...lowers]
        // [...highers] >= arr[partitionIndex]
        // recursion to sort lower values
        quickSort_by_Nico_Lomuto(arr, startIndex, partitionIndex - 1);
        // recursion to sort higher values
        quickSort_by_Nico_Lomuto(arr, partitionIndex + 1, endIndex);
    }
    return arr;
}

function partition_by_Nico_Lomuto(arr, startIndex, endIndex) {
    // easier to implement and understand
    //var pivot = arr[startIndex];
    // Lomuto partitioning hits its worst case if the selected pivot value is the LARGEST value in the range!
    // prevent the worst case by carefully selecting the pivot value!
    var pivot = selectPivot(arr, startIndex, endIndex, true); // true = MUST do swapping!
    var i = startIndex;
    // a single loop from the bottom to the second element from the top, because the pivot is at the top position
    for(var j = startIndex; endIndex > j; j++) {
        // is the current element smaller than or equal to the pivot?
        if(pivot >= arr[j]) {
            swap(arr, i, j);
            i++;
        }
    }
    swap(arr, i, endIndex);
    return i;
}

function selectPivot(arr, startIndex, endIndex, doSwap) {
    // find a pivot value which is not the lowest value within the range.
    // Get 2 UNIQUE elements; if that fails then all elements have the same value.
    var pivot = arr[startIndex]; // get the element at the first position
    // try to find a different element value
    var j = endIndex;
    while(j >= startIndex && pivot == arr[j]) { j--; }
    if(startIndex > j) {
        //console.log('selectPivot(arr, ' + startIndex + ',' + endIndex + '), all elements are equal, nothing to sort');
        return pivot;
    }
    // check which element is lower,
    // use the lower value as pivot and swap its position with the last position (endIndex)
    if(pivot > arr[j]) {
        pivot = arr[j];
        if(doSwap) { swap(arr, j, endIndex); }
    } else {
        if(doSwap) { swap(arr, startIndex, endIndex); }
    }
    return pivot;
}

function swap(arr, a, b) {
    // swapping 2 array elements in 1 line is possible with the
    // 'ES6 destructuring swap', specific to JavaScript,
    // but it is VERY VERY SLOW, almost 3 times slower!
    //[arr[a], arr[b]] = [arr[b], arr[a]];
    // the normal way, as in many programming languages
    var _swap_temp = arr[a];
    arr[a] = arr[b];
    arr[b] = _swap_temp;
}
The code above can easily be found on the internet; there are other QuickSort variant implementations, but this is the 'typical' code. It relies on recursive calls, and depending on the data the recursion can sometimes go too deep, causing the error 'Uncaught RangeError: Maximum call stack size exceeded' in browsers.
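That stack error is easy to reproduce with a deliberately deep recursion; this throwaway sketch is unrelated to sorting itself and only demonstrates the failure mode.

```javascript
// A deliberately deep recursion; the '+ 1' after the call prevents
// tail-call optimization, so stack frames really pile up.
function depth(n) {
  return n > 0 ? depth(n - 1) + 1 : 0;
}

let overflowed = false;
try {
  depth(1e9); // far deeper than any engine's call stack allows
} catch (e) {
  overflowed = true; // RangeError: Maximum call stack size exceeded
}
console.log(overflowed); // true
```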
The way to solve this recursion problem is not to use recursion at all. Not all language implementations perform Tail Call Optimization (TCO), so we can't depend on TCO, especially for JavaScript in the browser. I've implemented a simple stack data structure to avoid recursion; please see the new function quickSort_by_Tony_Hoare_non_recursive() below. This function is meant to replace quickSort_by_Tony_Hoare() later, but for now I keep it under a separate name because I want to do a speed comparison.
function quickSort_by_Tony_Hoare_non_recursive(arr) {
    'use strict';
    if(!arr || 1 > arr.length) { return null; }
    var arrLength = arr.length;
    var startIndex = 0, endIndex = arrLength - 1;
    // don't use Array.push() and Array.pop() because they are too slow;
    // use 2 arrays instead of 1 to avoid unnecessarily increasing and reducing stackLength
    var stackStartIndex = [], stackEndIndex = [];
    var stackLength = 0;
    var partitionIndex;
    do {
        partitionIndex = partition_by_Tony_Hoare(arr, startIndex, endIndex);
        if(partitionIndex > startIndex) {
            // there are lower values to partition
            // are there higher values too?
            if(endIndex > partitionIndex + 1) {
                // we don't do it now, push it onto the stack for later
                stackStartIndex[stackLength] = partitionIndex + 1;
                stackEndIndex[stackLength] = endIndex;
                stackLength++; // increase counter for the next slot
            }
            // set new parameters to partition the lower values
            endIndex = partitionIndex;
        } else if(endIndex > partitionIndex + 1) {
            // there are no lower values, only higher values; this is the worst case!
            // set new parameters for the next partitioning
            startIndex = partitionIndex + 1;
        } else {
            // no valid partitioning index, so we take one from the stack (if any)
            if(stackLength > 0) {
                stackLength--; // reduce the counter to get the last position from the stack
                startIndex = stackStartIndex[stackLength];
                endIndex = stackEndIndex[stackLength];
            } else {
                break; // finished!
            }
        }
    } while(endIndex > startIndex);
    return arr;
}
As usual, we developers should always test our implementation, especially before moving on to the next step of optimizing the algorithm. For testing I use the following function to generate an array of random numbers.
function generateRandomNumberArray(total) {
    if(!total || 1 > total) {
        return;
    } else if(total > 3e6) { // 3 * Math.pow(10,6)
        // limit max to 3 million, to avoid the operation taking too long
        total = 3e6;
    }
    function getRandomNumber(max) {
        //return Math.floor(Math.random() * Math.floor(max)); // slowest
        //return (Math.random() * Math.floor(max)) | 0; // faster
        return ~~(Math.random() * Math.floor(max)); // fastest -> use double bitwise NOT
    }
    return Array(total).fill(0).map(val => getRandomNumber(total));
}

var randomNumbers = generateRandomNumberArray(2e6);
The simple code below can verify whether a sorting algorithm works properly.
function verifySortedData(arr, startIndex = 0, endIndex, isPrintOutput = true) {
    if(!arr || 1 > arr.length) {
        console.log('verifySortedData(...) : no data');
        return false;
    }
    endIndex = endIndex || arr.length - 1;
    // test ASCENDING order (current element >= previous element)
    //var isSorted = arr.every((val, idx) => idx == 0 || val >= arr[idx - 1]);
    var i, invalidIndex = -1;
    for(i = startIndex; endIndex >= i; i++) {
        if(i > 0) {
            if(arr[i - 1] > arr[i]) {
                invalidIndex = i;
                break;
            }
        }
    }
    var msg = (0 > invalidIndex ? 'PASSED VERIFICATION' : '* FAILED *') +
        ', verifySortedData(...), length: ' + arr.length;
    if(isPrintOutput) { printOutput(msg); }
    if(invalidIndex > 0) {
        // throw new Error('**** FAILED at index: ' + invalidIndex);
        return false; // failed, data is not sorted properly!
    }
    return true; // passed
}
At this point we have everything ready for testing and benchmarking. Let's compare 4 approaches:
 Native JS Array.sort()
 QuickSort by Tony Hoare (classic recursive)
 QuickSort by Tony Hoare (non-recursive)
 QuickSort by Nico Lomuto (recursive)
function testSortingRandomData(total) {
    if(!total) {
        total = 1e6; // Math.pow(10,6)
    }
    randomNumbers = generateRandomNumberArray(total);
    printOutput(' start benchmarking ');

    // make a copy
    var data = [...randomNumbers];
    // try the native JS Array.sort()
    var startTimeMs = Date.now();
    data.sort((a, b) => a - b);
    var elapsedTimeMs = Date.now() - startTimeMs;
    printOutput('1. Native JS Array.sort(), elapsed time: ' + elapsedTimeMs + 'ms');
    // verify it
    verifySortedData(data);

    // try our reference implementation of QuickSort by Tony Hoare;
    // to be fair, we use the same data, so copy again
    data = [...randomNumbers];
    startTimeMs = Date.now();
    quickSort_by_Tony_Hoare(data, 0, data.length - 1);
    elapsedTimeMs = Date.now() - startTimeMs;
    printOutput('2. QuickSort by Tony Hoare (classic recursive), elapsed time: ' + elapsedTimeMs + 'ms');
    // verify it
    verifySortedData(data);

    // do Tony Hoare without recursion
    data = [...randomNumbers];
    startTimeMs = Date.now();
    quickSort_by_Tony_Hoare_non_recursive(data);
    elapsedTimeMs = Date.now() - startTimeMs;
    printOutput('3. QuickSort by Tony Hoare (non-recursive), elapsed time: ' + elapsedTimeMs + 'ms');
    // verify it
    verifySortedData(data);

    // do Nico Lomuto
    data = [...randomNumbers];
    startTimeMs = Date.now();
    quickSort_by_Nico_Lomuto(data, 0, data.length - 1);
    elapsedTimeMs = Date.now() - startTimeMs;
    printOutput('4. QuickSort by Nico Lomuto (recursive), elapsed time: ' + elapsedTimeMs + 'ms');
    // verify it
    verifySortedData(data);
}

var eleBenchmarkResult;

function printOutput(text) {
    if(!eleBenchmarkResult) {
        eleBenchmarkResult = document.getElementById('benchmark_result');
    }
    console.log(text);
    eleBenchmarkResult.value += "\n" + text;
}

function clearOutput() {
    if(!eleBenchmarkResult) {
        eleBenchmarkResult = document.getElementById('benchmark_result');
    }
    eleBenchmarkResult.value = '';
}
Please click the button above to see the result on your device, and run the benchmark a few times to see the average time. Because the running time is normally a few hundred milliseconds on a computer/laptop, this is only 'microbenchmarking': the elapsed time fluctuates and can't be very accurate. For proper benchmarking we would need to use much larger data and run it multiple times to get an average time.
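To reduce the fluctuation a little, a small helper (my own sketch, not part of the benchmark page above) can run a sort several times on fresh copies of the same data and report fastest/average/slowest times:

```javascript
// Hypothetical helper: run a sorting function several times on fresh
// copies of the same data and report fastest/average/slowest times in ms.
function benchmark(sortFn, data, runs = 5) {
  const times = [];
  for (let r = 0; runs > r; r++) {
    const copy = [...data]; // fresh copy so each run sorts unsorted data
    const t0 = Date.now();
    sortFn(copy);
    times.push(Date.now() - t0);
  }
  const sum = times.reduce((a, b) => a + b, 0);
  return {
    fastest: Math.min(...times),
    average: Math.round(sum / runs),
    slowest: Math.max(...times),
  };
}
```

Usage would look like benchmark(a => a.sort((x, y) => x - y), randomNumbers), which returns an object such as { fastest, average, slowest }.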
We can see the data is sorted properly (it passed verification). On my MacBook (Intel CPU) I consistently see that the non-recursive Tony Hoare algorithm is the fastest, but on my smartphone (with a Samsung Exynos CPU) the Nico Lomuto version (recursive and non-optimized) is the fastest. It is strange and shocking to see how much the CPU architecture affects the speed. Sadly, I also noticed that the speed difference against the built-in JS Array.sort() is not large, so now I am ready for the challenge of the next step: optimizing the code to squeeze out every little millisecond.
The QuickSort algorithm uses 'divide-and-conquer' logic: the partitioning tries to divide an array into 2 regions. The best pivot value is the true median, not the first, middle or last element; the true median splits the 2 regions exactly in the middle so each side has an equal (balanced) number of lower-value and higher-value elements. Unfortunately we can't get the true median because the array is not sorted yet. I've tried a few different approaches, including finding a better pivot value using the 'median of 3' (first, middle, last) method below.
function get_median_value_of_3(a, b, c) {
    // a, b and c are VALUES (not positions)
    if(a > b) {
        if(c > b) {
            if(a > c) {
                return c;
            } else {
                return a;
            }
        }
    } else if(b > c) {
        if(a >= c) {
            return a;
        } else {
            return c;
        }
    }
    return b;
}
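For context, here is a sketch of how median-of-3 pivot selection would be wired up (my own illustrative wiring, not the final code of this post); the median logic is inlined so the snippet stands alone:

```javascript
// Sketch (hypothetical wiring): pick the pivot as the median of the
// first, middle and last elements of the current range.
function medianOf3Pivot(arr, startIndex, endIndex) {
  const a = arr[startIndex];
  const mid = (startIndex + endIndex) >> 1; // integer middle index
  const b = arr[mid];
  const c = arr[endIndex];
  // median of the three values, without Math.* calls
  if (a > b) {
    if (c > b) { return a > c ? c : a; }
    return b;
  }
  if (b > c) { return a >= c ? a : c; }
  return b;
}

console.log(medianOf3Pivot([9, 1, 5, 3, 7], 0, 4)); // candidates 9, 5, 7 -> 7
```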
In my testing with purely generated random data, I found that using get_median_value_of_3(..) causes some overhead; it may bring a small improvement on certain data, but the general performance is slower. In the end, I personally feel it is not worth it.
Going sideways a little, about the Timsort algorithm (quoted from the wiki): "Timsort was designed to take advantage of runs of consecutive ordered elements that already exist in most real-world data, natural runs. It iterates over the data collecting elements into runs, and simultaneously merging those runs together. When there are runs, doing this decreases the total number of comparisons needed to fully sort the list."
The statement above helped me understand that generated random data may not contain any already-sorted regions, while real-world data may, so generated random data has different characteristics from real-world data. Maybe in the future, when I have much larger real-world data, I'll revisit this blog and do more testing with get_median_value_of_3(..), or I will try to find an algorithm faster than finding the median.
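The 'natural runs' idea can be made concrete with a tiny sketch of mine: count the maximal ascending runs in an array. Real-world data tends to have few, long runs, while generated random data has many short ones.

```javascript
// Count maximal ascending runs; a new run starts at every descent.
function countAscendingRuns(arr) {
  if (2 > arr.length) { return arr.length; } // 0 or 1 element -> 0 or 1 run
  let runs = 1;
  for (let i = 1; arr.length > i; i++) {
    if (arr[i - 1] > arr[i]) { runs++; } // a descent starts a new run
  }
  return runs;
}

console.log(countAscendingRuns([1, 2, 3, 7, 4, 5, 6])); // 2
console.log(countAscendingRuns([5, 4, 3, 2, 1]));       // 5
```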
QuickSort partitioning divides the array down to the smallest count, which is 2 elements, so the first and easiest optimization is to remove the last level of partitioning, the one that handles 2 or 3 elements, as described by the code below.
function sort_2_or_3_elements(arr, a, b, c) {
    /**
     * a, b and c are INDEX POSITIONS (not values)
     * a & b are mandatory, they must exist
     * c is optional; if it exists then it must follow: c > b
     *
     * NOTE: this is ascending order!
     */
    if(arr[a] > arr[b]) { swap(arr, a, b); }
    // does c exist and is it a valid value (c > b)?
    if(c > b) {
        if(arr[b] > arr[c]) {
            swap(arr, b, c);
            if(arr[a] > arr[b]) {
                swap(arr, a, b);
            }
        }
    }
}
The function sort_2_or_3_elements(..) is good for removing the last partitioning loop, but it only gives a small speed improvement, so I continued looking for a faster way. Many articles discuss combining QuickSort with another sorting algorithm to improve speed, and in my testing I found this to be true: combining QuickSort with an auxiliary sorting algorithm produces faster sorting times. The code below shows the 3 sorting algorithms that I tried: InsertionSort, BubbleSort and ShellSort.
function bubbleSort(arr, startIndex, endIndex) {
    for(let i = startIndex; endIndex > i; i++) {
        for(let j = startIndex; endIndex > j; j++) {
            if(arr[j] > arr[j + 1]) {
                swap(arr, j, j + 1);
            }
        }
    }
    return arr;
}

function shellSort(arr, startIndex = 0, endIndex) {
    // NOTE:
    // 1. ShellSort is a generalization of insertion sort that allows the exchange of items that are far apart.
    // 2. Don't use ShellSort on arrays longer than 10K elements, it is very slow!
    // 3. ShellSort performs best on partially sorted arrays!!
    // 4. Not stable BECAUSE it uses gaps!!!
    if(!arr || 2 > arr.length) { return arr; }
    const arrLength = arr.length;
    // default value if not defined
    endIndex = endIndex || arrLength - 1;
    // anyone may adjust these gap values to optimize;
    // other researchers: Knuth's and Hibbard's sequence is k = 3k + 1 -> ..., 40, 13, 4, 1
    //let prevValue = 1;
    //gaps = Array(20).fill(1).map((v, i) => prevValue = i == 0 ? 1 : (prevValue * 3) + 1);
    //gaps.reverse();
    //let gaps = [797161, 265720, 88573, 29524, 9841, 3280, 1093, 364, 121, 40, 13, 4, 1];
    // Marcin Ciura's "Best Increments for the Average Case of Shellsort"
    const gaps = [1750, 701, 301, 132, 57, 23, 10, 4, 1];
    const gapCount = gaps.length;
    var gap, k;
    for(k = 0; gapCount > k; k++) {
        gap = gaps[k];
        // NOTE: if gap = 1 then it is the same as InsertionSort!
        // === start of InsertionSort ===========
        insertionSort(arr, startIndex, endIndex, gap);
        // === end of InsertionSort ===========
    }
    return arr;
}

function insertionSort(arr, startIndex, endIndex, gap = 1) {
    var i, j, key;
    for(i = startIndex + gap; endIndex >= i; i++) {
        key = arr[i];
        /* Move elements of arr[startIndex..i-1] that are greater than key
           one gap ahead of their current position */
        /*
        j = i - gap;
        while(j >= startIndex && compFn(arr[j], key) > 0) {
            arr[j + gap] = arr[j]; // next item
            j -= gap;
        }
        */
        // using for() is faster than while()
        for(j = i - gap; j >= startIndex; j -= gap) {
            // shift DOWN, eg: 5->4, 4->3, 3->2, 2->1, 1->0 !
            if(arr[j] > key) {
                arr[j + gap] = arr[j];
                continue;
            }
            break;
        }
        arr[j + gap] = key;
    }
    return arr;
}
All 3 of these alternative sorting algorithms take startIndex and endIndex parameters because, when combined with QuickSort, they need to sort partial inner ranges of the array. Fortunately all of these algorithms are easy to implement and test, and they also use less code than QuickSort.
Before I show the optimized code below, I want to point out the changes that I've made to the original code and what I have learned about logic optimization in the JavaScript language.
 The big improvement (faster execution) comes from using an auxiliary inner sort. In my testing with generated random data, if the element count is more than 32 then it is faster to sort within the QuickSort partition, but InsertionSort (ShellSort with gap = 1) is faster for sorting 32 elements or fewer. This hardcoded value can be changed to suit different data and machines.
 Using a 'stack' data structure to avoid recursive calls is necessary not only for speed but also for robustness against the 'Maximum call stack size exceeded' error (NOTE: avoid using JavaScript Array.push() and Array.pop() for managing the stack because they are slow).
 Avoid making execution jump very far: declare and use local variables inside the function to avoid slower execution; this holds for most programming languages. Because the code is small, I combined all functions into a single function just to make sure the code runs as fast as it can. Some people may strongly disagree with this because it may cause maintenance problems in the future, but I found the final code is still small, and a little commentary inside the code helps people understand the logic.
 Avoid using JavaScript Math APIs such as Math.max(), Math.min() or other Math.* functions in hot code; a classic if() is better because it has less overhead.
 Reduce condition checks like if(..) or while(..) to the minimum possible, because even a single if() inside a loop with many repetitions is noticeably slower, especially if the loop count is more than a million. Depending on the case, it is sometimes better to have a small amount of code duplication to save one or two if()s.
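As an aside on the swap note in the code comments earlier: the ES6 destructuring swap allocates a temporary array on every call, which is why the plain temp-variable swap is faster. A minimal sketch of the two styles:

```javascript
// ES6 destructuring swap: concise, but allocates a temp array each call.
function swapDestructuring(arr, a, b) {
  [arr[a], arr[b]] = [arr[b], arr[a]];
}

// Plain temp-variable swap: no allocation, just three assignments.
function swapTemp(arr, a, b) {
  const t = arr[a];
  arr[a] = arr[b];
  arr[b] = t;
}

const xs = [1, 2];
swapDestructuring(xs, 0, 1); // xs is now [2, 1]
swapTemp(xs, 0, 1);          // back to [1, 2]
console.log(xs.join(','));   // "1,2"
```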
The optimized code combining QuickSort (Hoare's non-recursive variant) and InsertionSort, called QuickInsertionSort:
function QuickInsertionSort(arr) {
    'use strict';
    if(!arr || 1 > arr.length) { return null; }
    var startIndex = 0, endIndex = arr.length - 1;
    // use a 'stack' data structure to eliminate recursive calls.
    // DON'T use Array.push() and Array.pop() because they are slow!!!
    // so use manual indexing
    var stackLength = 0;
    // use 2 arrays instead of 1 array to speed up (avoids '+= 2' and '-= 2' calculations)
    var startIndexes = [];
    var endIndexes = [];
    // variables for partitioning
    var partitionIndex, pivot, left, right, _swap_temp;
    // variables for insertion sort
    var i, j, key;
    do {
        // in my testing, I found 32 to be a very good choice for totally generated random data;
        // more than 100 causes slower overall speed.
        if(32 >= endIndex - startIndex) {
            // even when using insertionSort,
            // we still need this because execution still comes here!!
            if(1 == endIndex - startIndex) {
                if(arr[startIndex] > arr[endIndex]) {
                    _swap_temp = arr[startIndex];
                    arr[startIndex] = arr[endIndex];
                    arr[endIndex] = _swap_temp;
                }
            } else {
                /**************************************
                 ****** start of insertion sort ********
                 ***************************************/
                for(i = startIndex + 1; endIndex >= i; i++) {
                    key = arr[i];
                    // Move elements of arr[startIndex..i-1] that are
                    // greater than key one position ahead
                    // of their current position
                    for(j = i - 1; j >= startIndex; j--) {
                        if(arr[j] > key) {
                            arr[j + 1] = arr[j];
                            continue;
                        }
                        // use 'break' to avoid decreasing 'j'
                        break;
                    }
                    arr[j + 1] = key;
                }
                /**************************************
                 ****** end of insertion sort **********
                 ***************************************/
            }
            // continue processing the next data; is there any data left on the stack?
            if(stackLength > 0) {
                // pop
                stackLength--; // reduce the counter to get the last position from the stack
                startIndex = startIndexes[stackLength];
                endIndex = endIndexes[stackLength];
            } else {
                // no data left on the stack, so we are finished
                break;
            }
        } else {
            // squeeze every millisecond by putting the main logic here instead of in a separate function;
            // in my testing, using median_of_3 does not give better results for totally generated random data!!
            /*********************************************
             *********** start of partitioning ************
             ************* Tony Hoare *********************
             **********************************************/
            // minimize the worst case scenario
            // === start of select pivot ============
            pivot = arr[startIndex];
            // try to find a different element value
            j = endIndex;
            while(j >= startIndex && pivot == arr[j]) { j--; }
            if(j > startIndex) {
                // check which element is lower;
                // use the lower value as pivot
                if(pivot > arr[j]) {
                    pivot = arr[j];
                }
            }
            // === end of select pivot ============
            left = startIndex;
            right = endIndex;
            do {
                while(pivot > arr[left]) { left++; }
                while(arr[right] > pivot) { right--; }
                if(left >= right) {
                    partitionIndex = right;
                    break;
                }
                //swap(left, right);
                // because there are many swaps, optimize by implementing the swap inline!
                _swap_temp = arr[left];
                arr[left] = arr[right];
                arr[right] = _swap_temp;
                left++;
                right--;
            } while(true); // loop forever until break
            if(partitionIndex > startIndex) {
                // there is a lower partition, so process it
                if(endIndex > partitionIndex + 1) {
                    // push the 'right' side partition info onto the stack for later
                    startIndexes[stackLength] = partitionIndex + 1;
                    endIndexes[stackLength] = endIndex;
                    stackLength++; // increase the counter for the NEXT slot
                }
                // prepare the next loop:
                // keep the same startIndex but update endIndex
                endIndex = partitionIndex;
            } else if(endIndex > partitionIndex + 1) {
                // at this point there is no 'lower' side partition, only a 'higher' side partition.
                // prepare the next loop:
                // keep the same endIndex but update startIndex
                startIndex = partitionIndex + 1;
            }
            /*********************************************
             ****** end of Tony Hoare partitioning ********
             **********************************************/
        }
    } while(endIndex > startIndex);
    return arr;
}
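Before trusting an implementation like QuickInsertionSort(..), it is cheap to cross-check it against the native sort on random data. sameSortResult is a hypothetical helper name of mine, not part of the code above:

```javascript
// Hypothetical sanity check: run any in-place sort function and
// compare its result against the native Array.sort(..) on random data.
function sameSortResult(sortFn, size = 1000) {
  const data = Array.from({ length: size }, () => ~~(Math.random() * size));
  const expected = [...data].sort((a, b) => a - b); // trusted reference
  const actual = [...data];
  sortFn(actual); // sort in place
  return expected.every((v, i) => v === actual[i]);
}

// e.g. sameSortResult(arr => QuickInsertionSort(arr)) should return true;
// here we self-test with the native sort as the candidate:
console.log(sameSortResult(a => a.sort((x, y) => x - y))); // true
```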
Benchmark result
Please note this was done on my machine; the results may be different on other machines and browser versions.
Test machine: MacBook Pro (15-inch, 2017), Intel Core i7 (2.8 GHz), macOS 10.14.5
Browsers: Chrome v75.0.3770.80, Safari v12.1.1, Firefox v67.0.1
Sorting 1 million random numbers

Due to the on-the-fly random number generator, each browser uses different random data. NOTE: this is 'micro benchmarking' so it is not very accurate, but I am showing these values to give general info about the performance after running it many, many times (I've lost count, definitely more than 100 times).

Time in milliseconds (1000ms = 1s), shown as FASTEST / AVG / SLOWEST:

Browser  | Array.sort() | QuickInsertionSort | Hoare non-recursive | Lomuto recursive
Chrome   | 386/390/396  | 86/96/116          | 122/129/142         | 162/164/171
Safari   | 394/449/497  | 108/143/187        | 250/255/260         | 378/386/393
Firefox  | 208/227/260  | 219/222/223        | 115/119/128         | 137/140/142
Micro benchmarking result notes:
 The pure JavaScript QuickInsertionSort implementation is faster than the built-in native JS Array.sort() in all 3 browsers; it runs fastest in Chrome, followed by Safari, then Firefox.
 The performance of the native Array.sort() implementation across browsers is fastest in Firefox, followed by Safari, then Chrome.
 Firefox is strange: QuickInsertionSort() has the same speed as, or is slower than, the native JS Array.sort(), yet the native JS Array.sort() is slower than quickSort_by_Tony_Hoare_non_recursive() and quickSort_by_Nico_Lomuto(). I've tried many things, including putting the code into a separate external .js file, with the same result. I can only guess that maybe Firefox prefers a few short functions over one large combined function.
 I did test using a custom compareFunction in this pure JS sorting implementation to make a fair comparison with the native JS Array.sort((a, b) => a - b); there is a performance drop, but only a small, insignificant one. I am guessing all browsers' built-in Array.sort(compareFunction) drops speed significantly due to the callback from C/C++ into JavaScript: moving back and forth causes a big performance impact.
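The callback-overhead guess in the last note can be made concrete with a rough probe of my own (not from the original benchmark): count how many times the engine calls back into JavaScript while sorting.

```javascript
// Rough probe: count how many times the compareFunction callback fires
// while the native sort orders 100,000 random numbers.
const nums = Array.from({ length: 100000 }, () => ~~(Math.random() * 1e6));

let calls = 0;
[...nums].sort((a, b) => { calls++; return a - b; });
console.log('compareFunction was called ' + calls + ' times');
// roughly n * log2(n) comparisons for random data, i.e. over a million
// callbacks, each one crossing the native/JavaScript boundary
```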
For a 1 million element array the elapsed time is around 100~300ms in all browsers; the difference is small, and this code is most useful for larger data sets.
Summary: the pure JavaScript sorting implementation above is simple, and many variants of this code already exist on the internet. I have only done a few small optimizations, yet it can perform faster than the current browser (Chrome, Firefox and Safari) implementations. Maybe someone will ask, 'is it worth using in a real project?'. I personally answer 'yes, it is worth it', specifically for larger data sets such as in ML (Machine Learning), which can have millions of elements and call sorting repeatedly for every new piece of data; that definitely needs a faster sorting algorithm. For typical small data (fewer than 10K elements), the current built-in JS Array.sort() should be sufficient.
I hope browser developers can improve and optimize their sorting algorithms, or use a different algorithm, to make them faster than this pure JavaScript implementation. I can only dream that they will create a new sorting API without a callback (compareFunction) just for sorting numbers in ascending order, but until that happens I will keep using this pure JavaScript sorting algorithm.
Keep calm and hack the planet ... ^.^