This is the 24th day of my participation in the August More Text Challenge

Lambda expressions

  • Lambda expressions are implemented internally via the invokedynamic instruction
  • Lambdas let you pass a function as an argument to a method
  • Lambda expressions make code more concise

Variable scope

  • A Lambda expression can only reference outer local variables that are effectively final; modifying such a variable inside the Lambda is a compile error
  • Outer local variables can be read directly inside a Lambda expression
  • A captured outer local variable need not be declared final, but it must never be reassigned afterwards, so that it implicitly has final semantics ("effectively final")
  • A Lambda expression may not declare a parameter or local variable with the same name as an outer local variable
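A minimal sketch of the effectively-final rule (the class and variable names are illustrative):

```java
import java.util.function.Supplier;

public class EffectivelyFinalDemo {
    public static void main(String[] args) {
        String greeting = "hello";  // never reassigned, so effectively final
        Supplier<String> s = () -> greeting + " world";  // legal capture

        // greeting = "changed";  // uncommenting this reassignment would make
        // the capture above a compile error: the variable would no longer be
        // effectively final

        System.out.println(s.get());  // hello world
    }
}
```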

Usage examples

Anonymous inner class

  • An anonymous inner class is still a class; you do not name it yourself, the compiler names it automatically
    • Anonymous inner class in Java:
    public class MainAnonymousClass {
      public static void main(String[] args) {
      	new Thread(new Runnable() {
      		@Override
      		public void run() {
      			System.out.println("Anonymous Class Thread run()");
      		}
      	}).start();
      }
    }
    • Implementing anonymous inner classes using Lambda expressions:
    public class MainLambda {
      public static void main(String[] args) {
      	new Thread(
      			() -> System.out.println("Lambda Thread run()")).start();
      }
    }

Parameterized functions

  • Abbreviating a function that takes parameters, first written out in full:
List<String> list = Arrays.asList("I", "love", "you", "too");
Collections.sort(list, new Comparator<String>() { // interface name
	@Override
	public int compare(String s1, String s2) { // method name
		if (s1 == null)
			return -1;
		if (s2 == null)
			return 1;
		return s1.length() - s2.length();
	}
});
  • The code implements the comparison logic by overriding the Comparator interface's compare() method with an anonymous inner class. With a Lambda expression it can be abbreviated as follows:
List<String> list = Arrays.asList("I", "love", "you", "too");
Collections.sort(list, (s1, s2) -> { // Omit the parameter list type
	if (s1 == null)
		return -1;
	if (s2 == null)
		return 1;
	return s1.length() - s2.length();
});
  • The anonymous inner class version and the Lambda version do exactly the same thing
  • In addition to omitting interface and method names, parameter types in code can also be omitted
  • Because of javac’s type inference mechanism, the compiler is able to infer the type of a parameter based on context information

Collection

forEach

  • Enhanced for loop:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
for(String str : list) {
	if (str.length() > 3)
		System.out.println(str);
}
  • Using the forEach() method in conjunction with the anonymous inner class:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
list.forEach(new Consumer<String>() {
	@Override
	public void accept(String str) {
		if (str.length() > 3) {
			System.out.println(str);
		}
	}
});
  • Lambda expressions are implemented as follows:
// Use forEach() to iterate with a Lambda expression
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
list.forEach(str -> {
	if (str.length() > 3) {
		System.out.println(str);
	}
});

The code above passes a Lambda expression to the forEach() method without naming the accept() method or the Consumer interface; type inference takes care of that

removeIf

  • The method signature: boolean removeIf(Predicate<? super E> filter);
    • Deletes every element in the container that satisfies the filter condition
      • Predicate is a functional interface with one method to implement: boolean test(T t)
  • Before Java 8, deleting elements while iterating over a container required an iterator; otherwise a ConcurrentModificationException would be thrown
  • Delete list elements using iterators:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
Iterator<String> it = list.iterator();
while (it.hasNext()) {
	if (it.next().length() > 3) {
		it.remove();
	}
}
  • Using the removeIf() method in conjunction with the anonymous inner class:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
list.removeIf(new Predicate<String>() {
	@Override
	public boolean test(String str) {
		return str.length() > 3;
	}
});
  • Using removeIf in conjunction with Lambda expressions:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
list.removeIf(str -> str.length() > 3);

Using a Lambda expression you don't need to remember the name of the Predicate interface or of its test() method; you just write a Lambda expression that returns a boolean

replaceAll

  • The method signature: void replaceAll(UnaryOperator<E> operator);
    • Performs the operation specified by operator on each element and replaces the original element with the result
      • UnaryOperator is a functional interface with one method to implement: T apply(T t)
  • Element substitution using subscripts:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
for (int i = 0; i < list.size(); i++) {
	String str = list.get(i);
	if (str.length() > 3) {
		list.set(i, str.toUpperCase());
	}
}
  • Using replaceAll in conjunction with anonymous inner classes:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
list.replaceAll(new UnaryOperator<String>() {
	@Override
	public String apply(String str) {
		if (str.length() > 3) {
			return str.toUpperCase();
		}
		return str;
	}
});

The code calls the replaceAll() method and implements the UnaryOperator interface using an anonymous inner class

  • Use Lambda expressions to implement:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
list.replaceAll(str -> {
	if (str.length() > 3) {
		return str.toUpperCase();
	}
	return str;
});

sort

  • The method is defined in the List interface, with signature: void sort(Comparator<? super E> c);
    • Sorts the container according to the comparison rule specified by c
      • Comparator is a functional interface with one method to implement: int compare(T o1, T o2)
  • Using the Collections.sort() method:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
Collections.sort(list, new Comparator<String>() {
	@Override
	public int compare(String str1, String str2) {
		return str1.length() - str2.length();
	}
});
  • Using the List.sort() method directly, together with a Lambda expression:
ArrayList<String> list = new ArrayList<>(Arrays.asList("I", "love", "you", "too"));
list.sort((str1, str2) -> str1.length() - str2.length());

spliterator

  • The method signature: Spliterator<E> spliterator();
    • A Spliterator can iterate elements one at a time, like an Iterator, or in batches, which reduces the cost of iteration
    • A Spliterator is splittable: calling trySplit() splits it into two, this one and a newly returned one, and the elements the two cover do not overlap
    • By calling trySplit() repeatedly the workload can be divided up, which makes multithreaded processing easier
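A small sketch of trySplit() on a list's Spliterator; the element values are illustrative:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Spliterator;
import java.util.concurrent.atomic.AtomicInteger;

public class SpliteratorDemo {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5, 6);

        Spliterator<Integer> first = list.spliterator();
        // trySplit() hands one part of the elements to a new Spliterator;
        // "first" keeps the rest, and the two parts do not overlap
        Spliterator<Integer> second = first.trySplit();

        AtomicInteger count = new AtomicInteger();
        second.forEachRemaining(i -> count.incrementAndGet());
        first.forEachRemaining(i -> count.incrementAndGet());
        System.out.println(count.get());  // 6: together they cover every element
    }
}
```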

Stream and parallelStream

  • stream() and parallelStream() each return a Stream view of the container
  • parallelStream() returns a parallel Stream
  • Stream is the core class of Java functional programming
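A minimal comparison of the two views; both compute the same result, but the parallel one may use multiple threads of the common ForkJoinPool:

```java
import java.util.Arrays;
import java.util.List;

public class ParallelStreamDemo {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);

        // sequential view
        int sequentialSum = list.stream().mapToInt(Integer::intValue).sum();
        // parallel view: same result, possibly computed on several threads
        int parallelSum = list.parallelStream().mapToInt(Integer::intValue).sum();

        System.out.println(sequentialSum);  // 15
        System.out.println(parallelSum);    // 15
    }
}
```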

Map

forEach

  • The method signature: void forEach(BiConsumer<? super K, ? super V> action);
    • Performs action on each mapping in the Map
      • BiConsumer is a functional interface with one method to implement: void accept(T t, U u)
  • Print all mappings in a Map, the way it was done before Java 8:
HashMap<Integer, String> map = new HashMap<>();
map.put(1, "one");
map.put(2, "two");
map.put(3, "three");
for (Map.Entry<Integer, String> entry : map.entrySet()) {
	System.out.println(entry.getKey() + "=" + entry.getValue());
}
  • Using Map’s forEach() method, combined with the anonymous inner class:
HashMap<Integer, String> map = new HashMap<>();
map.put(1, "one");
map.put(2, "two");
map.put(3, "three");
map.forEach(new BiConsumer<Integer, String>() {
	@Override
	public void accept(Integer k, String v) {
		System.out.println(k + "=" + v);
	}
});
  • Using Lambda expressions:
HashMap<Integer, String> map = new HashMap<>();
map.put(1, "one");
map.put(2, "two");
map.put(3, "three");
map.forEach((k, v) -> System.out.println(k + "=" + v));

getOrDefault

  • The method signature: V getOrDefault(Object key, V defaultValue);
    • Looks up the value for the given key in the Map; if no mapping is found, returns defaultValue
  • Query the value of the specified key in the Map, returning "NoValue" if there is none:
HashMap<Integer, String> map = new HashMap<>();
map.put(1, "one");
map.put(2, "two");
map.put(3, "three");
System.out.println(map.getOrDefault(4, "NoValue"));

putIfAbsent

  • The method signature: V putIfAbsent(K key, V value);
    • Adds the given value to the Map only when key has no mapping or is mapped to null; otherwise the Map is left unchanged
    • The method combines the check and the insertion into one call, which is more convenient
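A short sketch of both branches of putIfAbsent() (the map contents are illustrative):

```java
import java.util.HashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        HashMap<Integer, String> map = new HashMap<>();
        map.put(1, "one");

        map.putIfAbsent(1, "uno");  // key 1 is already mapped: no change
        map.putIfAbsent(2, "two");  // key 2 is absent: mapping is added

        System.out.println(map.get(1));  // one
        System.out.println(map.get(2));  // two
    }
}
```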

remove

  • The method signature: V remove(Object key);
    • Deletes the mapping for the given key from the Map
  • The method signature: boolean remove(Object key, Object value);
    • Deletes the mapping only if key currently maps to the given value
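A sketch of the two-argument remove(), which deletes only when key and value both match (the map contents are illustrative):

```java
import java.util.HashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        HashMap<Integer, String> map = new HashMap<>();
        map.put(1, "one");
        map.put(2, "two");

        map.remove(1, "uno");  // value does not match: nothing is removed
        map.remove(2, "two");  // key and value both match: mapping removed

        System.out.println(map.containsKey(1));  // true
        System.out.println(map.containsKey(2));  // false
    }
}
```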

replace

  • The method signature: V replace(K key, V value);
    • Replaces the existing value with value, but only if a mapping for key exists in the Map
  • The method signature: boolean replace(K key, V oldValue, V newValue);
    • Replaces the value with newValue only if key currently maps to oldValue; otherwise does nothing
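A sketch of both replace() overloads (the map contents are illustrative):

```java
import java.util.HashMap;

public class ReplaceDemo {
    public static void main(String[] args) {
        HashMap<Integer, String> map = new HashMap<>();
        map.put(1, "one");

        map.replace(1, "ONE");         // key exists: value replaced
        map.replace(2, "TWO");         // key absent: no-op, nothing inserted
        map.replace(1, "one", "uno");  // current value is "ONE", not "one": no-op

        System.out.println(map.get(1));          // ONE
        System.out.println(map.containsKey(2));  // false
    }
}
```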

replaceAll

  • The method signature: void replaceAll(BiFunction<? super K, ? super V, ? extends V> function);
    • Applies function to each mapping in the Map and replaces the original value with the result
    • BiFunction is a functional interface with one method to implement: R apply(T t, U u)
  • Convert all values in a Map to uppercase, the way it was done before Java 8:
HashMap<Integer, String> map = new HashMap<>();
map.put(1, "one");
map.put(2, "two");
map.put(3, "three");
for (Map.Entry<Integer, String> entry : map.entrySet()) {
	entry.setValue(entry.getValue().toUpperCase());
}
  • Using the replaceAll method in conjunction with the anonymous inner class:
HashMap<Integer, String> map = new HashMap<>();
map.put(1, "one");
map.put(2, "two");
map.put(3, "three");
map.replaceAll(new BiFunction<Integer, String, String>() {
	@Override
	public String apply(Integer k, String v) {
		return v.toUpperCase();
	}
});
  • Use Lambda expressions to implement:
HashMap<Integer, String> map = new HashMap<>();
map.put(1, "one");
map.put(2, "two");
map.put(3, "three");
map.replaceAll((k, v) -> v.toUpperCase());

merge

  • The method signature: V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction);
    • If the mapping for key does not exist or is null, associates key with value
    • Otherwise executes remappingFunction; if the result is not null it is associated with key, otherwise the mapping for key is removed from the Map
    • BiFunction is a functional interface with one method to implement: R apply(T t, U u)
  • The semantics of merge() are involved, but using it is straightforward. A classic scenario is appending a new error message to an existing one:
map.merge(key, newMsg, (v1, v2) -> v1 + v2);

compute

  • The method signature: V compute(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction);
    • Associates the result of remappingFunction with key; if the result is null, the mapping for key is removed from the Map
  • Use the compute implementation to concatenate the new error message to the original message:
map.compute(key, (k, v) -> v == null ? newMsg : v.concat(newMsg));

computeIfAbsent

  • The method signature: V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction);
    • mappingFunction is called only when the Map has no mapping for key or the mapped value is null; if its result is not null, the result is associated with key
    • Function is a functional interface with one method to implement: R apply(T t)
  • computeIfAbsent() is often used to establish an initial mapping for a key. For example, a many-valued Map could be declared as Map<K, Set<V>>; inserting a new value then looks like this:
Map<Integer, Set<String>> map = new HashMap<>();
if (map.containsKey(1)) {
	map.get(1).add("one");
} else {
	Set<String> valueSet = new HashSet<String>();
	valueSet.add("one");
	map.put(1, valueSet);
}
  • Use Lambda expressions to implement:
Map<Integer, Set<String>> map = new HashMap<>();
map.computeIfAbsent(1, v -> new HashSet<String>()).add("one");

computeIfPresent

  • The method signature: V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction);
    • The effect is the opposite of computeIfAbsent(): the function runs only when a mapping for key already exists
    • If remappingFunction returns null, the mapping for key is removed; otherwise the result replaces the original value
  • Equivalent code before Java 8:
if (map.get(key) != null) {
	V oldValue = map.get(key);
	V newValue = remappingFunction.apply(key, oldValue);
	if (newValue != null) {
		map.put(key, newValue);
	} else {
		map.remove(key);
	}
	return newValue;
}
return null;
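The same semantics written with computeIfPresent() and Lambdas; the map contents and key names are illustrative:

```java
import java.util.HashMap;

public class ComputeIfPresentDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> hits = new HashMap<>();
        hits.put("page", 1);

        hits.computeIfPresent("page", (k, v) -> v + 1);     // present: 1 -> 2
        hits.computeIfPresent("missing", (k, v) -> v + 1);  // absent: no-op
        System.out.println(hits.get("page"));               // 2

        hits.computeIfPresent("page", (k, v) -> null);      // null result: removed
        System.out.println(hits.containsKey("page"));       // false
    }
}
```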

Streams API

  • Stream:
    • The workhorse of Java functional programming
    • A stream is not a data structure, just a view of a data source
    • The data source can be:
      • An array
      • A Java container
      • An I/O channel
  • A stream is a view over a data source; it is created by calling the corresponding factory method:
    • Collection.stream()
    • Collection.parallelStream()
    • Arrays.stream(T[] array)
  • Stream interface inheritance:

  • The four stream interfaces in the figure all inherit from BaseStream:
    • IntStream, LongStream and DoubleStream correspond to the three primitive types int, long and double (not to their wrapper types)
    • Stream covers all remaining (reference) types
  • Separate Stream interfaces exist for the different data types in order to:
    • Improve performance
    • Add type-specific operations
  • Although a stream is produced by a container via Collection.stream(), streams differ from collections in the following ways:
    • No storage: a stream is not a data structure, just a view of a data source; the source can be an array, a Java container, an I/O channel, and so on
    • Functional in nature: operations on a stream never modify the underlying data source; filtering a stream, for example, does not delete the filtered elements but produces a new stream without them
    • Lazy execution: operations on a stream are not executed immediately, only when the result is actually needed
    • Consumable: a stream can be traversed only once and is spent after iteration; like a container's iterator, it must be regenerated to iterate again
  • Operations on a stream fall into two categories:
    • Intermediate operations: always lazily executed; calling an intermediate operation merely produces a new stream marked with that operation
    • Terminal operations: trigger the actual computation; when it happens, the accumulated intermediate operations are pipelined to reduce the number of iterations. The stream is invalid once the computation completes
  • Stream interface common methods:
    • Intermediate operations:
      • concat()
      • distinct()
      • filter()
      • flatMap()
      • limit()
      • map()
      • peek()
      • skip()
      • sorted()
      • parallel()
      • sequential()
      • unordered()
    • Terminal operations:
      • allMatch()
      • anyMatch()
      • collect()
      • count()
      • findAny()
      • findFirst()
      • forEach()
      • forEachOrdered()
      • max()
      • min()
      • noneMatch()
      • reduce()
      • toArray()
  • Intermediate and terminal operations can be told apart by the return type:
    • Operations that return a Stream are mostly intermediate
    • Otherwise the operation is terminal
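A small sketch of the lazy behavior described above: the intermediate filter() does nothing until the terminal count() runs.

```java
import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        Stream<String> s = Stream.of("I", "love", "you", "too")
                                 .filter(str -> {
                                     System.out.println("filtering " + str);
                                     return str.length() > 1;
                                 });
        System.out.println("nothing has been filtered yet");

        long n = s.count();  // the terminal operation triggers the whole pipeline
        System.out.println(n);  // 3
    }
}
```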
  • Using the Stream methods:
    • Streams are closely tied to functional interfaces; without functional interfaces streams cannot be operated on
      • A functional interface is an interface with exactly one abstract method
      • A Lambda expression can be used wherever a functional interface is expected

forEach

  • The method signature: void forEach(Consumer<? super E> action);
    • Performs the given action on each element of the container, i.e. iterates over the elements
/* Iterate with Stream.forEach() */
Stream<String> stream = Stream.of("I", "love", "you", "too");
stream.forEach(str -> System.out.println(str));
  • Since forEach() is a terminal operation, the code above executes immediately and prints all the strings

filter

  • The function prototype: Stream<T> filter(Predicate<? super T> predicate);
    • Returns a stream containing only the elements that satisfy the predicate
/* Keep only the strings of length 3 */
Stream<String> stream = Stream.of("I", "love", "you", "too");
stream.filter(str -> str.length() == 3)
	  .forEach(str -> System.out.println(str));
  • Prints "you" and "too", the strings of length 3
  • Because filter() is an intermediate operation, calling filter() alone performs no actual computation and produces no output

distinct

  • The function prototype: Stream<T> distinct();
    • Returns a Stream with duplicate elements removed
Stream<String> stream = Stream.of("I", "love", "you", "too", "too");
stream.distinct()
	  .forEach(str -> System.out.println(str));
  • Prints the strings with the duplicate "too" removed

sorted

  • There are two sorting functions:
    • Natural ordering: Stream<T> sorted();
    • Custom comparator: Stream<T> sorted(Comparator<? super T> comparator);
Stream<String> stream = Stream.of("I", "love", "you", "too");
stream.sorted((str1, str2) -> str1.length() - str2.length())
	  .forEach(str -> System.out.println(str));
  • Prints the strings in ascending order of length

map

  • The function prototype: <R> Stream<R> map(Function<? super T, ? extends R> mapper);
    • Returns a Stream consisting of the results of applying mapper to each element
    • The number of elements does not change, but their type may, depending on the mapper's return type
Stream<String> stream = Stream.of("I", "love", "you", "too");
stream.map(str -> str.toUpperCase())
	  .forEach(str -> System.out.println(str));
  • Prints the original string in uppercase

flatMap

  • The function prototype: <R> Stream<R> flatMap(Function<? super T, ? extends Stream<? extends R>> mapper);
    • Applies mapper to each element and returns a new Stream made up of the elements of all the streams mapper returns
    • All elements of the original Stream are "flattened" into a single new Stream; both the number and the type of elements may change
Stream<List<Integer>> stream = Stream.of(Arrays.asList(1, 2), Arrays.asList(3, 4, 5));
stream.flatMap(list -> list.stream())
	  .forEach(i -> System.out.println(i));
  • The original stream has two elements, two List<Integer>s. After flatMap() each List is "flattened" into its numbers, producing a stream of five numbers, so the final output is the numbers 1 through 5

Advanced Stream API

  • Reduction operations:
    • Also known as fold operations
    • Combine all elements into a single summary result by repeatedly applying a combining operation
    • Summing the elements, finding the maximum or minimum, counting the elements, and collecting all elements into a list or set are all reductions
  • The Stream library has two general-purpose reduction operations:
    • reduce()
    • collect()
  • It also has specialized reductions for convenience: sum(), max(), min(), count(), and so on

reduce

  • Produces a single value from a set of elements
  • sum(), max(), min(), count(), and so on are all reductions; they are provided as separate methods because they are so commonly used
  • reduce() has three overloads:
    • Optional<T> reduce(BinaryOperator<T> accumulator);
    • T reduce(T identity, BinaryOperator<T> accumulator);
    • <U> U reduce(U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner);
    • The definitions get longer, but the semantics stay the same; the extra parameters specify an initial value (identity) or, for parallel execution, a combiner that merges partial results
  • Find the longest word in a list; here the "largest" element means the longest one:
    /* Find the longest word */
    Stream<String> stream = Stream.of("I", "love", "you", "too");
    Optional<String> longest = stream.reduce((s1, s2) -> s1.length() >= s2.length() ? s1 : s2);
    // Optional<String> longest = stream.max((s1, s2) -> s1.length() - s2.length());
    System.out.println(longest.get());
    • Picks out the longest word, "love"
    • Optional is a container holding at most one value; using Optional avoids dealing with null
  • Find the total length of a group of words. This is a sum whose input type is String and whose result type is Integer:

/* Find the total length of the words */
Stream<String> stream = Stream.of("I", "love", "you", "too");
Integer lengthSum = stream.reduce(0,	// initial value (1)
								  (sum, str) -> sum + str.length(),	// accumulator (2)
								  (a, b) -> a + b);	// partial-result combiner (3)
// int lengthSum = stream.mapToInt(str -> str.length()).sum();
System.out.println(lengthSum);
  • The accumulator at (2) above does two things:
    • Maps each string to its length
    • Adds that length to the running sum
  • Merging these two steps into one reduce() call can even be better for performance
  • Alternatively you can combine mapToInt() and sum(), as in the commented-out line

collect

  • reduce() is good at producing a single value; to build a collection or a complex object such as a Map from a Stream, use collect()
  • Example:
/* Convert a Stream to a container or a Map */
Stream<String> stream = Stream.of("I", "love", "you", "too");
List<String> list = stream.collect(Collectors.toList());
// A stream can be consumed only once; re-create it before each further collect()
Set<String> set = Stream.of("I", "love", "you", "too").collect(Collectors.toSet());
// Convert a Stream to a Map
Map<String, Integer> map = Stream.of("I", "love", "you", "too").collect(Collectors.toMap(Function.identity(), String::length));
  • These convert the Stream into a List, a Set, and a Map, respectively
  • Note the following:
    • Function.identity()
    • String::length
    • Collectors

Static and default methods in interfaces

  • Function is an interface, so Function.identity() is interesting for two reasons:
    • Java 8 allows concrete methods in interfaces, of two kinds:
      • static methods: identity() is a static method of the Function interface
      • default methods
    • Function.identity() returns a Lambda object whose output equals its input, equivalent to the Lambda expression t -> t

Prior to Java 8, adding a new abstract method to a published interface was difficult or impossible, because every class implementing it would have to be changed. The default method introduced in Java 8 solves this problem: the new method can be implemented directly in the interface. With default methods in place, static methods were also allowed in interfaces, avoiding the need for dedicated utility classes
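A minimal sketch of both kinds of concrete interface methods (the Greeter interface is invented for illustration):

```java
public class InterfaceMethodsDemo {
    interface Greeter {
        String name();  // the single abstract method

        // default method: a concrete implementation inherited by implementors
        default String greet() {
            return prefix() + name();
        }

        // static method: a utility attached to the interface itself
        static String prefix() {
            return "Hello, ";
        }
    }

    public static void main(String[] args) {
        Greeter g = () -> "Lambda";     // Greeter is a functional interface
        System.out.println(g.greet());  // Hello, Lambda
    }
}
```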

Method references
  • The syntax String::length is called a method reference; it replaces certain forms of Lambda expressions
  • If all a Lambda expression does is call one existing method, the Lambda can be replaced by a method reference
  • Method references fall into four categories:
    • Reference to a static method: Integer::sum
    • Reference to a method of a particular object: list::add
    • Reference to an instance method of a class: String::length
    • Reference to a constructor: HashMap::new
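One example of each of the four categories, with the equivalent Lambda noted in a comment:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;

public class MethodRefDemo {
    public static void main(String[] args) {
        // 1. static method: equivalent to (a, b) -> Integer.sum(a, b)
        BiFunction<Integer, Integer, Integer> sum = Integer::sum;
        System.out.println(sum.apply(2, 3));  // 5

        // 2. method of a particular object: equivalent to s -> list.add(s)
        ArrayList<String> list = new ArrayList<>();
        Function<String, Boolean> add = list::add;
        add.apply("x");
        System.out.println(list.size());  // 1

        // 3. instance method of a class: equivalent to s -> s.length()
        Function<String, Integer> length = String::length;
        System.out.println(length.apply("four"));  // 4

        // 4. constructor: equivalent to () -> new HashMap<String, String>()
        Supplier<HashMap<String, String>> factory = HashMap::new;
        System.out.println(factory.get().isEmpty());  // true
    }
}
```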

Collector

  • Collector is the abstraction built for the Stream.collect() method
  • Converting a Stream into a container or a Map requires answering at least three questions:
    • What is the target container: ArrayList, HashSet, or TreeMap?
    • How are new elements added to the target container: List.add() or Map.put()?
    • If the reduction runs in parallel, how does collect() merge the partial results into one?
  • The full collect() method definition: <R> R collect(Supplier<R> supplier, BiConsumer<R, ? super T> accumulator, BiConsumer<R, R> combiner);
    • Its three parameters correspond one-to-one to the three questions above
    • Passing all three on every call is cumbersome, so the Collector abstraction simply bundles them together
    • <R, A> R collect(Collector<? super T, A, R> collector);
  • The Collectors utility class provides static methods that create a variety of common collectors
  • A Stream can therefore be reduced to a List in either of two ways:
/* Reduce a Stream to a List */
Stream<String> stream = Stream.of("I", "love", "you", "too");

List<String> list1 = stream.collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
System.out.println(list1);

// A stream can be consumed only once, so create a fresh one for the second way
List<String> list2 = Stream.of("I", "love", "you", "too").collect(Collectors.toList());
System.out.println(list2);
  • Normally you do not specify the three parameters of collect() by hand; you call collect(Collector<? super T, A, R> collector) and obtain the collector from the Collectors utility class
  • The behavior of the collector you pass in determines the behavior of collect()

Use collect() to generate a Collection

  • Converting a Stream to a List or a Set is the most common use of collect()
  • The Collectors utility class provides ready-made collectors for both:
/* Convert a Stream to a List or Set */
Stream<String> stream = Stream.of("I", "love", "you", "too");

List<String> list = stream.collect(Collectors.toList());
// A stream can be consumed only once; use a fresh stream for the Set
Set<String> set = Stream.of("I", "love", "you", "too").collect(Collectors.toSet());
  • Because the return types are interfaces, it is unspecified which concrete container type the library actually chooses
  • Sometimes you need to pick the container type yourself; Collectors.toCollection(Supplier<C> collectionFactory) allows that:
/* Use toCollection() to specify the container type */
ArrayList<String> arrayList = stream.collect(Collectors.toCollection(ArrayList::new));
HashSet<String> hashSet = stream.collect(Collectors.toCollection(HashSet::new));
  • These pin the results to ArrayList and HashSet, respectively

Use collect() to generate a Map

  • A Stream is backed by some data source, such as an array or a container, but not by a Map
  • You can still produce a Map from a Stream; you only need to decide what the keys and values represent
  • collect() usually produces a Map in one of three ways:
    • With a collector from Collectors.toMap(): you specify how to produce the keys and the values
    • With a collector from Collectors.partitioningBy(): used when partitioning the elements in two
    • With a collector from Collectors.groupingBy(): used when grouping the elements
  • Using a collector produced by toMap():
    • This parallels Collectors.toCollection()
    • Example: turn a student list into a Map of <student, GPA>
/* Use toMap() to collect students' GPAs */
Map<Student, Double> studentToGPA = students.stream().collect(Collectors.toMap(Function.identity(),	// how to produce the key
																				student -> computeGPA(student)));	// how to produce the value
  • Using a collector produced by partitioningBy():
    • Suited to splitting the elements of a Stream into two complementary, non-overlapping parts according to a boolean condition
    • Example: divide the students into a passing and a failing group
/* Divide the students into passing and failing groups */
Map<Boolean, List<Student>> passingFailing = students.stream().collect(Collectors.partitioningBy(s -> s.getGrade() >= PASS_THRESHOLD));
  • Using a collector produced by groupingBy():
    • The most flexible of the three, similar to the GROUP BY clause in SQL
    • groupingBy() groups data by an attribute; elements with the same attribute value are mapped to the same key in the Map
    • Example: Group employees by department
/* Group employees by department */
Map<Department, List<Employee>> byDept = employees.stream().collect(Collectors.groupingBy(Employee::getDepartment));
  • Sometimes grouping alone is not enough. In SQL, GROUP BY is often combined with further aggregation:
    • First group the employees by department
    • Then count the number of employees in each department
    • An enhanced version of groupingBy() satisfies this need:
      • The enhanced **groupingBy()** lets you perform an operation on each group after grouping, such as summing, counting, averaging, or type conversion
      • The collector that groups the elements first is called the upstream collector
      • The collector that then operates on each group is called the downstream collector
/* Use a downstream collector to count the number of people in each department */
// Note: Collectors.counting() yields a Long, so the map's value type is Long
Map<Department, Long> totalByDept = employees.stream()
												 .collect(Collectors.groupingBy(Employee::getDepartment,
												 								Collectors.counting()));

Like GROUP BY in SQL, groupingBy() can be composed very flexibly

  • A downstream collector can itself contain a further downstream collector:
    • Group the employees by department
    • But keep each employee's name rather than the Employee objects themselves
/* Group employees by department, keeping only the employees' names */
Map<Department, List<String>> byDept = employees.stream()
												 .collect(Collectors.groupingBy(Employee::getDepartment,
												  Collectors.mapping(Employee::getName,
												 					Collectors.toList())));

Use collect() for the string join

  • The collectors produced by Collectors.joining() join strings, replacing a hand-written for loop
  • Collectors.joining() has three overloads, corresponding to three ways of joining:
/* Join strings with Collectors.joining() */
Stream<String> stream = Stream.of("I", "love", "you");

String joined1 = stream.collect(Collectors.joining());	// Iloveyou
// A stream can be consumed only once; create a fresh one for each variant
String joined2 = Stream.of("I", "love", "you").collect(Collectors.joining(","));	// I,love,you
String joined3 = Stream.of("I", "love", "you").collect(Collectors.joining(",", "{", "}"));	// {I,love,you}
  • In addition to the collectors provided by the Collectors utility class, you can define custom collectors, or call the collect(Supplier<R> supplier, BiConsumer<R, ? super T> accumulator, BiConsumer<R, R> combiner) method directly to collect information in any required form
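A minimal sketch of the three-argument collect(): the supplier creates the result container, the accumulator folds one element in, and the combiner merges partial containers during parallel execution:

```java
import java.util.stream.Stream;

public class CollectDemo {
    static String join(Stream<String> words) {
        return words.collect(StringBuilder::new,     // supplier: create the container
                             StringBuilder::append,  // accumulator: add one element
                             StringBuilder::append)  // combiner: merge partial results (parallel case)
                    .toString();
    }

    public static void main(String[] args) {
        System.out.println(join(Stream.of("I", "love", "you"))); // Iloveyou
    }
}
```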

Stream Pipelines

  • Questions raised by using the Stream API:
    • How is such a powerful Stream API implemented?
    • How does the Pipeline execute? Does each invocation trigger an iteration?
    • How does automatic parallelism work? With how many threads?
  • How containers execute Lambda expressions – take the ArrayList.forEach() method as an example:
/* * ArrayList.forEach() */
public void forEach(Consumer<? super E> action) {
    ...
    for (int i = 0; modCount == expectedModCount && i < size; i++) {
        // callback method
        action.accept(elementData[i]);
    }
    ...
}
  • The main logic of ArrayList.forEach() is a for loop that keeps invoking the action.accept() callback to process the elements
  • Callback methods are widely used in Java GUI listeners, and a Lambda expression can serve as a callback method
  • The Stream API makes heavy use of Lambda expressions as callback methods, but the keys to understanding Stream are:
    • Pipelining
    • Automatic parallelism
int longestStringLengthStartingWithA = strings.stream()
                                              .filter(s -> s.startsWith("A"))
                                              .mapToInt(String::length)
                                              .max()
                                              .orElse(0);   // max() returns an OptionalInt
  • The code above finds the maximum length among strings starting with the letter "A":
    • One straightforward implementation is to perform an iteration for each function call. While functionally correct, its efficiency is unacceptable
    • The library instead uses a Stream Pipeline, which cleverly avoids multiple iterations. The basic idea is to perform as many user-specified operations as possible within a single iteration
  • Related operations in Stream:
    • Intermediate operation: Intermediate operations
      • Stateless: Stateless
        • unordered()
        • filter()
        • map()
        • mapToInt()
        • mapToLong()
        • mapToDouble()
        • flatMap()
        • flatMapToInt()
        • flatMapToLong()
        • flatMapToDouble()
        • peek()
      • Stateful: Stateful
        • distinct()
        • sorted()
        • limit()
        • skip()
    • End operation: Terminal operations
      • Short circuit operation: short-circuiting
        • anyMatch()
        • allMatch()
        • noneMatch()
        • findFirst()
        • findAny()
      • Non-short-circuit operation:
        • forEach()
        • forEachOrdered()
        • toArray()
        • reduce()
        • collect()
        • max()
        • min()
        • count()
  • All Stream operations fall into two categories, which are further subdivided because the Stream internals handle each case differently:
    • Intermediate operation: an intermediate operation is merely recorded; it does not execute by itself
      • Stateless: processing an element is not affected by previous elements, and the result is known as soon as that element is processed
      • Stateful: processing an element is affected by other elements, and the result is not known until all elements have been processed
    • Terminal operation: only a terminal operation triggers the actual computation
      • Short-circuit operation: may return a result without processing all elements
      • Non-short-circuit operation: returns a result only after all elements have been processed
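The laziness of intermediate operations can be observed directly; in this sketch a counter inside peek() shows that nothing executes until the terminal operation runs:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LazyDemo {
    // returns {calls before terminal op, calls after, result size}
    static int[] run() {
        AtomicInteger peeked = new AtomicInteger();
        Stream<String> s = Stream.of("Apple", "Banana", "Avocado")
                                 .peek(x -> peeked.incrementAndGet())  // intermediate: only recorded
                                 .filter(x -> x.startsWith("A"));      // intermediate: only recorded
        int before = peeked.get();                          // still 0: nothing has run yet
        List<String> out = s.collect(Collectors.toList());  // terminal: triggers one iteration
        return new int[] { before, peeked.get(), out.size() };
    }

    public static void main(String[] args) {
        int[] r = run();
        System.out.println(r[0] + " " + r[1] + " " + r[2]); // 0 3 2
    }
}
```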

Stream Pipeline implementation scheme

  • A straightforward Stream Pipeline implementation scheme:

  • Find the length of the longest string:
    • A straightforward way is to execute each function call as its own iteration and store the intermediate results in some data structure, such as an array or a container:
      • The filter() method executes immediately on being called
      • It selects all strings beginning with "A" and places them in a list, list1
      • list1 is then passed to mapToInt(), which also executes immediately
      • Its results are placed in list2
      • Finally list2 is traversed once more to find the largest number, the final result
    • This implementation is simple and intuitive, but it has two obvious defects:
      • Too many iterations: the number of iterations equals the number of function calls
      • Too many intermediate results: every function call produces an intermediate result, whose storage is pure overhead
  • Finding the longest string length in a single iteration, without the Stream API:
int longest = 0;
for (String str : strings) {
    if (str.startsWith("A")) {              // like filter(): keep strings starting with "A"
        int len = str.length();             // like mapToInt(): get the string's length
        longest = Math.max(longest, len);   // like max(): keep the running maximum
    }
}
  • This approach not only reduces the number of iterations, it also avoids storing intermediate results. This is exactly what a Stream Pipeline does: it places all three operations inside a single iteration
    • As long as the intent is known in advance, the Stream API can always be reimplemented equivalently in this way

Stream Pipeline solution

  • The designer of the Stream library cannot know the user's intent in advance, so how can the library work without assuming anything about the user's behavior?
  • The solution is to record the user's operations step by step and, when the user invokes a terminal operation, stack all the recorded operations together and execute them in a single iteration. This solution must address four issues:
    • How to record user operations?
    • How do operations stack up?
    • How to perform the operation after stacking?
    • Where are the results after execution?

How are operations recorded?

  • The operation here refers to the Stream intermediate operation
  • Many Stream operations require a Lambda callback, so a complete operation is a triple:
    • <data source, operation, callback function>
  • Stream uses the concept of a Stage to describe a complete operation, represented by an instance of a PipelineHelper subclass; chaining the Stages in order forms the Stream Pipeline
  • Stream class and interface inheritance diagram:

  • IntPipeline, LongPipeline and DoublePipeline are designed for the three primitive types (rather than the boxed wrapper types) and sit parallel to ReferencePipeline
  • Head represents the first Stage, the one produced by calling a method such as Collection.stream(); it obviously contains no operation
  • StatelessOp and StatefulOp represent stateless and stateful Stages, corresponding to stateless and stateful operations
  • Schematic diagram of Stream Pipeline organization structure:

  • The Head (Stage0) is obtained from the Collection.stream() method, and the subsequent series of intermediate operations keep producing new Streams
  • These Stream objects are organized as a doubly linked list, forming the whole pipeline. Since each Stage records the previous Stage as well as its own operation and callback function, all operations on the data source can be established on top of this structure. This is how Stream records its operations
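A tiny sketch showing that each intermediate call creates a distinct Stream object, a new Stage linked to the previous one, without executing anything:

```java
import java.util.List;
import java.util.stream.Stream;

public class StageDemo {
    static boolean stagesAreDistinct() {
        Stream<String> head   = List.of("I", "love", "you").stream(); // Head (Stage0), no operation
        Stream<String> stage1 = head.filter(s -> s.length() > 1);     // StatelessOp (Stage1)
        Stream<Integer> stage2 = stage1.map(String::length);          // StatelessOp (Stage2)
        // each intermediate operation returned a brand-new Stream object
        return head != stage1 && (Object) stage1 != (Object) stage2;
    }

    public static void main(String[] args) {
        System.out.println(stagesAreDistinct()); // true
    }
}
```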

How do operations stack up?

  • Recording the operations solves the first problem; for the Stream Pipeline to do its job, all recorded operations must then be stacked together
  • Only the current Stage knows how to execute the operation it contains; a Stage knows neither which operation the next Stage performs nor what form its callback function takes. So stacking cannot be achieved by simply executing each Stage's operation and callback in order, starting from the Head of the Stream Pipeline
  • Solving this requires a protocol that coordinates the calls between adjacent Stages. That protocol is the Sink interface, whose methods are:
| Method | Purpose |
| --- | --- |
| void begin(long size) | Called before the elements are iterated, telling the Sink to get ready |
| void end() | Called after all elements have been traversed, telling the Sink that no more elements are coming |
| boolean cancellationRequested() | Asks whether the operation can end, so that short-circuit operations can finish as early as possible |
| void accept(T t) | Called for each element during iteration; receives a pending element and processes it. A Stage encapsulates its own operation and callback function into this method; the previous Stage simply calls the current Stage's accept(T t) |
  • With the Sink protocol, adjacent Stages call each other easily: each Stage wraps its own operation into a Sink, and the previous Stage only needs to call the next Stage's accept() method without knowing anything about its internals
  • For stateful operations, Sink's begin() and end() methods must be implemented:
    • For example, Stream.sorted() is a stateful intermediate operation
    • Its Sink.begin() method would create a container to hold the results
    • accept() adds elements to that container
    • Finally, end() sorts the container
  • For short-circuit operations, Sink.cancellationRequested() must be implemented:
    • For example, Stream.findFirst() is a short-circuit operation
    • As soon as an element is found, cancellationRequested() should return true so that the caller can end the search as early as possible
  • Sink's four interface methods cooperate to complete the computation. In fact, the essence of the Stream API's internal implementation is how these four methods are overridden
  • {begin(), accept(), cancellationRequested(), end()}
  • Example: a possible flow for Sink.accept()
void accept(U u) {
    1. Process u with the callback wrapped in the current Sink
    2. Pass the result to the downstream Sink in the pipeline
}
  • All Sink methods follow this [process -> forward] model
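The [process -> forward] model can be re-created by hand. The sketch below is a hypothetical simplification (MiniSink is an invented interface, not the JDK's non-public Sink) that chains filter -> map -> max as callbacks and runs the whole chain in a single iteration:

```java
import java.util.List;

public class MiniPipeline {
    // an invented, simplified stand-in for the JDK's Sink interface
    interface MiniSink<T> {
        default void begin(long size) {}
        void accept(T t);
        default void end() {}
    }

    // terminal Sink: the exit of the call chain, it only processes, never forwards
    static class MaxSink implements MiniSink<Integer> {
        int max = Integer.MIN_VALUE;
        public void accept(Integer len) { max = Math.max(max, len); }
    }

    static int longestStartingWithA(List<String> strings) {
        MaxSink maxSink = new MaxSink();
        // map stage: process (String -> length), then forward downstream
        MiniSink<String> mapSink = s -> maxSink.accept(s.length());
        // filter stage: forward only strings starting with "A"
        MiniSink<String> filterSink = s -> { if (s.startsWith("A")) mapSink.accept(s); };
        // "executing the pipeline" is one iteration over the source
        filterSink.begin(strings.size());
        strings.forEach(filterSink::accept);
        filterSink.end();
        return maxSink.max;
    }

    public static void main(String[] args) {
        System.out.println(longestStartingWithA(List.of("Apple", "Bee", "Avocado"))); // 7
    }
}
```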
  • Example: how does the intermediate operation of Stream wrap its operation as Sink and how does Sink forward the result to the next Sink
/* * Stream.map(): produces a new Stream */
public final <R> Stream<R> map(Function<? super P_OUT, ? extends R> mapper) {
    ...
    return new StatelessOp<P_OUT, R>(this, StreamShape.REFERENCE,
                                     StreamOpFlag.NOT_SORTED | StreamOpFlag.NOT_DISTINCT) {
        /* * opWrapSink() returns a Sink wrapped around the callback function */
        @Override
        Sink<P_OUT> opWrapSink(int flags, Sink<R> downstream) {
            return new Sink.ChainedReference<P_OUT, R>(downstream) {
                @Override
                public void accept(P_OUT u) {
                    // process u with the callback mapper wrapped in the current Sink
                    R r = mapper.apply(u);
                    // pass the result to the downstream Sink in the pipeline
                    downstream.accept(r);
                }
            };
        }
    };
}
  • The callback function mapper is packed into a Sink:
    • Stream.map() is a stateless intermediate operation, so map() returns a StatelessOp inner-class object, which is a new Stream
    • Calling the new Stream's opWrapSink() method yields a Sink wrapping the current callback function
  • Example:
    • The Stream.sorted() method sorts the elements in a Stream
    • It is a stateful intermediate operation, because the final order cannot be determined until all elements have been read
    • sorted() encapsulates its Sink as follows:
/* * The Sink implementation inside Stream.sorted() */
class RefSortingSink<T> extends AbstractRefSortingSink<T> {
    // holds the elements to be sorted
    private ArrayList<T> list;

    RefSortingSink(Sink<? super T> downstream, Comparator<? super T> comparator) {
        super(downstream, comparator);
    }

    @Override
    public void begin(long size) {
        ...
        // create the list that holds the elements to be sorted
        list = (size >= 0) ? new ArrayList<T>((int) size) : new ArrayList<T>();
    }

    @Override
    public void end() {
        // sorting cannot begin until all elements have been received
        list.sort(comparator);
        downstream.begin(list.size());

        if (!cancellationWasRequested) {
            // the downstream Sink contains no short-circuit operation:
            // pass every result to the downstream Sink in the pipeline
            list.forEach(downstream::accept);
        } else {
            /* * The downstream Sink contains a short-circuit operation: * before each element, ask cancellationRequested() whether processing can stop */
            for (T t : list) {
                if (downstream.cancellationRequested()) {
                    break;
                }
                // pass the result to the downstream Sink in the pipeline
                downstream.accept(t);
            }
        }
        downstream.end();
        list = null;
    }

    @Override
    public void accept(T t) {
        /* * Process with the operation wrapped in the current Sink: * add the element to the intermediate list */
        list.add(t);
    }
}
  • Sink's four interface methods here:
    • First, begin() tells the Sink how many elements will be sorted, which helps size the intermediate container
    • Then accept() adds elements to that intermediate container; the caller invokes it repeatedly during execution until all elements have been iterated
    • Finally, end() is called once traversal is complete; sorting starts, and after it finishes the results are passed on to the downstream Sink
    • If the downstream Sink is short-circuiting, its cancellationRequested() is consulted continually while the results are being passed downstream, to see whether processing can stop

How is the overlay operation performed?

  • Sink encapsulates each step of a Stream operation and stacks the operations with the [process -> forward] model. Calling a terminal operation triggers execution of the whole pipeline
    • No operation can follow a terminal operation, so a terminal operation does not create a new pipeline Stage; the pipeline's linked list grows no further
    • A terminal operation creates a Sink wrapping its own operation. This is the last Sink and has no downstream, so it only processes the data without forwarding results
    • In Sink's [process -> forward] model, the terminal operation's Sink is the exit of the call chain
  • How an upstream Sink finds its downstream Sink:
    • One conceivable solution is to set up a Sink field in PipelineHelper and locate the downstream Stage in the pipeline to access its Sink field
    • Instead of holding a Sink field, Stream obtains the Sink through the AbstractPipeline.opWrapSink(int flags, Sink downstream) method, whose role is:
      • Return a new Sink combining the operation the current Stage represents with a Sink that can deliver results downstream
  • Why a new Sink object is used instead of returning a Sink field:
    • opWrapSink() merges the current operation with the downstream Sink (its downstream parameter) into a new Sink
    • So by repeatedly calling the previous Stage's opWrapSink(), starting from the last Stage of the pipeline back to the very beginning (excluding Stage0, which represents the data source and contains no operation), a single Sink representing all operations on the pipeline is obtained
/* * AbstractPipeline.wrapSink(): * wraps Sinks from downstream to upstream; if the Sink passed in represents the terminal operation, * the returned value is a Sink representing all operations on the pipeline */
final <P_IN> Sink<P_IN> wrapSink(Sink<E_OUT> sink) {
    ...
    for (AbstractPipeline p = AbstractPipeline.this; p.depth > 0; p = p.previousStage) {
        sink = p.opWrapSink(p.previousStage.combinedFlags, sink);
    }
    return (Sink<P_IN>) sink;
}
  • On the assembly line Stage, all operations from start to finish are packaged into a Sink, and executing this Sink is equivalent to executing the entire assembly line:
/* * AbstractPipeline.copyInto(): * performs the operations represented by wrappedSink on the data represented by spliterator */
final <P_IN> void copyInto(Sink<P_IN> wrappedSink, Spliterator<P_IN> spliterator) {
    ...
    if (!StreamOpFlag.SHORT_CIRCUIT.isKnown(getStreamAndOpFlags())) {
        // notify the Sink that traversal is about to begin
        wrappedSink.begin(spliterator.getExactSizeIfKnown());
        // iterate
        spliterator.forEachRemaining(wrappedSink);
        // notify the Sink that traversal is complete
        wrappedSink.end();
    }
    ...
}

The code above first calls wrappedSink.begin() to tell the Sink that data is coming, then calls the Spliterator's forEachRemaining() method to iterate over the data, and finally calls wrappedSink.end() to notify the Sink that processing is finished
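The short-circuit path (taken when SHORT_CIRCUIT is known) is observable from user code; in this sketch anyMatch() stops the iteration as soon as the answer is known:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class ShortCircuitDemo {
    // a short-circuit terminal operation ends the iteration as soon as
    // cancellationRequested() reports that the answer is already known
    static int processedBeforeMatch() {
        AtomicInteger processed = new AtomicInteger();
        Stream.of("Bee", "Apple", "Cat", "Avocado")
              .peek(s -> processed.incrementAndGet())
              .anyMatch(s -> s.startsWith("A"));  // true after the 2nd element
        return processed.get();                   // "Cat" and "Avocado" were never touched
    }

    public static void main(String[] args) {
        System.out.println(processedBeforeMatch()); // 2
    }
}
```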

Where are the results after execution?

  • Not every Stream terminal operation needs a return value; some exist only for their side effects (Side-effects):
    • For example, printing results with the Stream.forEach() method is a common side-effect scenario
    • In fact, almost all scenarios other than printing should avoid side effects
    • Side effects must not be abused: since a Stream may execute in parallel, neither correctness nor efficiency can be guaranteed
    • Most uses of side effects can be replaced, more safely and efficiently, by reduction operations
// ======================== wrong way to collect ========================
ArrayList<String> results = new ArrayList<>();
stream.filter(s -> pattern.matcher(s).matches())
      .forEach(s -> results.add(s));   // side effect on shared mutable state

// ======================== correct way to collect ======================
List<String> results = stream.filter(s -> pattern.matcher(s).matches())
                             .collect(Collectors.toList());
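A sketch of why the reduction route stays safe even in parallel: collect() gives each thread its own container and then merges the partial containers, so no shared mutable state is ever touched:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelCollectDemo {
    static List<Integer> evens(int n) {
        return IntStream.range(0, n).parallel()  // may run on many threads
                        .filter(i -> i % 2 == 0)
                        .boxed()
                        .collect(Collectors.toList()); // per-thread lists, merged at the end
    }

    public static void main(String[] args) {
        // complete and in encounter order, despite parallel execution
        System.out.println(evens(10)); // [0, 2, 4, 6, 8]
    }
}
```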
  • The pipeline results that need to be returned are stored in different locations depending on the Stream termination operation:

  • For operations in the table that return a boolean or an Optional (a container holding one value), the value only needs to be recorded in the corresponding Sink and returned when execution ends
  • collect(), reduce(), max() and min() are all reduction operations. Although max() and min() also return an Optional, their underlying implementation is actually done through the reduce() method
  • When an array is returned, the result is of course placed in an array. But before the array is finally returned, the results are stored in a Node data structure:
    • Node is a multiway tree whose elements live in its leaves, and one leaf can hold several elements. This structure is convenient for parallel execution

Conclusion

  • To use a Lambda expression, there must be a corresponding functional interface (an interface containing exactly one abstract method)
  • Lambda expressions are mainly used to define inline-executed implementations of such method-type interfaces
  • Lambda expressions remove the hassle of anonymous methods and anonymous inner classes, giving Java simple yet powerful functional programming capabilities