This is the fifth day of my participation in the First Challenge 2022. For details, see: First Challenge 2022.
Preface
Hello everyone, my name is Kano, a full-stack engineer!
With "Lambda must know and must know", "Lambda built-in functional interface", and "Lambda method reference" behind us, I believe we now have a solid grasp of Lambda and can put it to use in real projects!
Collections are among the most common data structures we work with in development. Before Java 8, however, operating on collections was clumsy: for grouping, finding a maximum, or collecting one attribute of an object, we generally either filtered with SQL or wrote Java for loops, which is tedious. The designers of Java 8 were evidently aware of this problem, which is why they brought us streams. This chapter combines Stream with the Lambda syntax covered earlier; I hope that by the end you will not only have learned the Stream operations but also become more fluent with Lambda.
Introduction to Streams
A stream is a sequence of elements produced from a raw data source. It supports data-processing operations (similar to some database operations) as well as operations common in other functional programming languages, such as filter and map. The main purpose of a stream is to express computations clearly.
Stream life cycle
I think of the flow of a stream as a life cycle, similar to producing a bottle of mineral water: get the water -> remove impurities -> add beneficial minerals -> bottle it.
- Get the water (prepare the raw data): generate the stream's sequence of elements;
- Process it (possibly at several stations): the stream's intermediate operations, which can be chained; each intermediate operation returns another stream;
- Bottle it: the stream's terminal operation, which returns either nothing or a non-stream result.
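The three stages above can be sketched as a single pipeline (a minimal sketch using plain integers; the class and method names are mine, chosen for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LifecycleDemo {
    // One pipeline, three stages of the life cycle
    static List<Integer> pipeline() {
        return Stream.of(3, 1, 4, 1, 5)            // 1. get the water: create the stream
                .filter(n -> n > 1)                // 2. remove impurities: intermediate operation
                .map(n -> n * 10)                  // 2. add minerals: another intermediate operation
                .collect(Collectors.toList());     // 3. bottle it: terminal operation
    }

    public static void main(String[] args) {
        System.out.println(pipeline()); // [30, 40, 50]
    }
}
```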
Stream operations
Creating a stream
Common ways to create a stream include collection.stream() and Stream.of(args).
Let's create an initial stream of users with three attributes: name (Kano1 to Kano5), age (11 to 15), and gender (0/1), and base the subsequent operations on this data.
/**
 * Initializes the user element sequence
 * @return a stream of users
 */
public Stream<User> userStream() {
    // Initialize the collection data
    List<User> users = new ArrayList<>();
    for (int i = 0; i < 5; i++) {
        users.add(new User("Kano" + (i + 1), 10 + (i + 1), i % 2));
    }
    // Build the element sequence from the raw data
    return users.stream();
}
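Besides collection.stream() and Stream.of(args), streams can also be created from arrays and from generator functions. A minimal sketch (the class and method names are mine):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CreateStreams {
    // An infinite iterated stream must be truncated, e.g. with limit
    static List<Integer> firstThree() {
        return Stream.iterate(1, n -> n + 1).limit(3).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // From a collection
        Stream<String> fromCollection = Arrays.asList("a", "b").stream();
        // From explicit values
        Stream<String> fromValues = Stream.of("a", "b");
        // From an array
        Stream<String> fromArray = Arrays.stream(new String[]{"a", "b"});
        System.out.println(fromCollection.count() + fromValues.count() + fromArray.count()); // 6
        System.out.println(firstThree()); // [1, 2, 3]
    }
}
```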
Intermediate operations
filter (filtering)
Takes a Predicate expression and keeps the elements that satisfy it.
public void testFilter() {
    userStream()
        .filter(user -> user.getAge() > 14) // keep users older than 14
        .forEach(System.out::println); // User(name=Kano5, age=15, gender=0)
}
Note: forEach is a terminal operation
sorted (sorting)
Two overloads: sorted() and sorted(Comparator<T> comparator)
public void testSort() {
    userStream()
        .sorted((u1, u2) -> u2.getAge() - u1.getAge()) // comparator: descending by age
        .forEach(System.out::println); // prints all users in descending age order
}
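One caveat worth knowing: a subtraction-based comparator like the one above can overflow for extreme int values, so Comparator.comparingInt(...).reversed() is the safer idiom for descending order. A minimal sketch (the class and method names are mine):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class SortDemo {
    // comparingInt(...).reversed() avoids the overflow risk of (u1, u2) -> u2 - u1
    static List<Integer> descending(List<Integer> ages) {
        return ages.stream()
                .sorted(Comparator.comparingInt((Integer n) -> n).reversed())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(descending(Arrays.asList(11, 15, 13))); // [15, 13, 11]
    }
}
```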
limit (truncating the stream)
Takes the first n elements; if the stream holds fewer than n elements, all of them are returned.
userStream()
    .sorted((u1, u2) -> u2.getAge() - u1.getAge()) // comparator: descending by age
    .limit(1) // take the first element after sorting
    .forEach(System.out::println); // User(name=Kano5, age=15, gender=0)
skip (skipping)
Skips the first n elements; if the stream holds fewer than n elements, an empty stream is returned.
public void testSkip() {
    userStream()
        .skip(4) // skip the first four
        .forEach(System.out::println); // User(name=Kano5, age=15, gender=0)
}
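skip pairs naturally with limit for simple in-memory paging. A minimal sketch (the class and method names are mine):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PagingDemo {
    // page is 0-based; pageSize elements per page
    static List<Integer> page(List<Integer> data, int page, int pageSize) {
        return data.stream()
                .skip((long) page * pageSize) // skip the earlier pages
                .limit(pageSize)              // keep one page's worth
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> data = IntStream.rangeClosed(1, 10).boxed().collect(Collectors.toList());
        System.out.println(page(data, 1, 3)); // [4, 5, 6]
    }
}
```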
distinct (deduplication)
Removes duplicate elements; if the elements are objects, deduplication is based on the objects' hashCode and equals.
public void testDistinct() {
    Stream.of(1, 1, 3, 2, 3)
        .distinct() // deduplicate
        .forEach(System.out::print); // 132
}
map (mapping)
Transforms each element of the stream into a new element; takes a Function expression.
public void testMap() {
    userStream()
        .map(User::getName) // map each user to their name
        .forEach(System.out::print); // Kano1Kano2Kano3Kano4Kano5
}
flatMap (flattening map)
Turns each element into a new stream, then merges those streams into one.
public void testFlatMap() {
    // Build each name into a Stream
    Function<User, Stream<String>> flatMapFunction = user -> Stream.<String>builder().add(user.getName()).build();
    // map
    userStream()
        .map(flatMapFunction) // produces a Stream of Streams
        .forEach(System.out::println); // java.util.stream.ReferencePipeline$Head@7a1ebcd8 (one line per user)
    // flatMap
    userStream()
        .flatMap(flatMapFunction) // flattens the inner streams
        .forEach(System.out::print); // Kano1Kano2Kano3Kano4Kano5
}
As the example above shows, when each name is built into a Stream, map returns a Stream of Streams, whereas flatMap flattens the values of those inner Streams into a single new sequence of elements.
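A more typical flatMap use is flattening nested collections, where each inner list becomes a stream via List::stream. A minimal sketch (the class and method names are mine):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FlatMapDemo {
    static List<Integer> flatten(List<List<Integer>> nested) {
        return nested.stream()
                .flatMap(List::stream) // each inner list becomes a stream, then all are merged
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<List<Integer>> nested = Arrays.asList(Arrays.asList(1, 2), Arrays.asList(3, 4));
        System.out.println(flatten(nested)); // [1, 2, 3, 4]
    }
}
```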
Terminal operations
- forEach: iterate over the elements
public void testForEach() {
    userStream().forEach(System.out::println);
}
count: count the elements
Similar to count in SQL; returns the number of elements.
public void testCount() {
    long count = userStream().count();
    System.out.println(count); // 5
}
max: get the maximum
max takes a comparator function and returns an Optional.
public void testMax() {
    Integer max = Stream.of(1, 2, 3, 4).max(Comparator.comparingInt(t -> t)).get();
    System.out.println(max); // 4
}
Note: Comparator.comparingInt(t -> t) is equivalent to (t1, t2) -> t1 - t2.
min: get the minimum
Used the same way as max.
public void testMin() {
    Integer min = Stream.of(1, 2, 3, 4).min(Comparator.comparingInt(t -> t)).get();
    System.out.println(min); // 1
}
findFirst: get the first element
Returns an Optional, or an empty Optional if no value exists.
public void testFindFirst() {
    Optional<User> firstUser = userStream().findFirst();
    System.out.println(firstUser.get()); // User(name=Kano1, age=11, gender=0)
}
findAny: get any element
Used the same way as findFirst, but returns any element (a serial stream usually returns the first; a parallel stream may return any of them).
public void testFindAny() {
    // serial stream
    Optional<Integer> findAny = Stream.of(1, 2, 3, 4).findAny();
    System.out.println(findAny.get());
    // parallel stream
    Optional<Integer> findAny2 = Stream.of(1, 2, 3, 4).parallel().findAny();
    System.out.println(findAny2.get());
}
Note: parallel() converts a serial stream into a parallel stream.
allMatch, anyMatch, noneMatch
Each takes a Predicate expression.
- allMatch: all elements match
public void testAllMatch() {
    // Check whether all elements are greater than 3
    Predicate<Integer> predicate = (i) -> i > 3;
    boolean allGt3 = Stream.of(1, 2, 3, 4).allMatch(predicate);
    System.out.println(allGt3); // false
    allGt3 = Stream.of(4, 5, 6).allMatch(predicate);
    System.out.println(allGt3); // true
}
- anyMatch: at least one element matches
public void testAnyMatch() {
    // Check whether any element is greater than 3
    Predicate<Integer> predicate = (i) -> i > 3;
    boolean anyGt3 = Stream.of(1, 2).anyMatch(predicate);
    System.out.println(anyGt3); // false
    anyGt3 = Stream.of(4, 5, 6).anyMatch(predicate);
    System.out.println(anyGt3); // true
}
- noneMatch: no element matches
public void testNoneMatch() {
    // Check whether no element is greater than 3
    Predicate<Integer> predicate = (i) -> i > 3;
    boolean noneGt3 = Stream.of(1, 2).noneMatch(predicate);
    System.out.println(noneGt3); // true
    noneGt3 = Stream.of(4, 5, 6).noneMatch(predicate);
    System.out.println(noneGt3); // false
}
reduce (reduction)
Repeatedly combines the sequence of elements with the given operation to produce a single result; it is commonly used to compute sums and the like. reduce is overloaded with one-, two-, and three-parameter forms.
- Optional<T> reduce(BinaryOperator<T> accumulator)
- T reduce(T identity, BinaryOperator<T> accumulator)
- In the two-parameter form, the first argument can be thought of as an initial value.
public void testReduce() {
    // sum
    Optional<Integer> reduceOptional = Stream.of(1, 2).reduce(Integer::sum);
    System.out.println(reduceOptional.get()); // 3
    // add an initial value to the calculation
    Integer reduce = Stream.of(1, 2).reduce(1, Integer::sum);
    System.out.println(reduce); // 4
}
- <U> U reduce(U identity, BiFunction<U, ? super T, U> accumulator, BinaryOperator<U> combiner)
- The three-parameter form is special. The third argument (the combiner) is never invoked in a serial stream, where this form behaves like the two-parameter one; in a parallel stream, however, it merges the partial results. Let's look at an example:
public void testReduce2() {
    // parallel stream, three-parameter form
    Integer sum = Stream.of(1, 2, 3).parallel().reduce(4, (t1, t2) -> {
        String log = String.join("|", "intermediate parameter", Thread.currentThread().getName(), t1 + " + " + t2);
        System.out.println(log);
        return t1 + t2;
    }, (t1, t2) -> {
        String log = String.join("|", "third parameter", Thread.currentThread().getName(), t1 + " + " + t2);
        System.out.println(log);
        return t1 + t2;
    });
    System.out.println(sum); // 18
    // serial stream, three-parameter form
    sum = Stream.of(1, 2, 3).reduce(4, (t1, t2) -> {
        String log = String.join("|", "intermediate parameter", Thread.currentThread().getName(), t1 + " + " + t2);
        System.out.println(log);
        return t1 + t2;
    }, (t1, t2) -> {
        System.out.println("The third parameter is executed.");
        return t1 + t2;
    });
    System.out.println(sum); // 10
}
The output is:
intermediate parameter|main|4 + 2
intermediate parameter|ForkJoinPool.commonPool-worker-9|4 + 1
intermediate parameter|ForkJoinPool.commonPool-worker-2|4 + 3
third parameter|ForkJoinPool.commonPool-worker-2|6 + 7
third parameter|ForkJoinPool.commonPool-worker-2|5 + 13
18
intermediate parameter|main|4 + 1
intermediate parameter|main|5 + 2
intermediate parameter|main|7 + 3
10
As the output above shows, in a parallel stream the second argument combines the initial value with each element separately, and the partial results are then merged by the third argument. It looks odd; in practice I generally use only the one- and two-parameter forms of reduce.
collect (collecting)
Collects the results into various types. Common forms include collect(Collectors.toList()), collect(Collectors.toSet()), collect(Collectors.toMap()), and collect(Collectors.groupingBy()); see java.util.stream.Collectors for more!
public void testCollect() {
    // collect(Collectors.toList()) collects into a List
    List<String> nameList = userStream().map(User::getName).collect(Collectors.toList());
    System.out.println(nameList); // [Kano1, Kano2, Kano3, Kano4, Kano5]
    // collect(Collectors.toSet()) collects into a Set
    Set<Integer> set = Arrays.asList(1, 2, 3, 1).stream().collect(Collectors.toSet());
    System.out.println(set); // [1, 2, 3]
    // collect(Collectors.toMap()) collects into a Map; with duplicate keys, use the three-argument toMap and choose which value wins
    Map<String, User> firstUserMap = userStream().limit(1).collect(Collectors.toMap(User::getName, Function.identity()));
    System.out.println(firstUserMap); // {Kano1=User(name=Kano1, age=11, gender=0)}
    // collect(Collectors.groupingBy()) groups by name
    Map<String, List<User>> collect = userStream().limit(1).collect(Collectors.groupingBy(User::getName));
    System.out.println(collect); // {Kano1=[User(name=Kano1, age=11, gender=0)]}
}
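The three-argument toMap mentioned in the comment above resolves duplicate keys with a merge function. A minimal sketch (the class and method names are mine):

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class ToMapDemo {
    // With duplicate keys, the third argument decides which value wins
    static Map<Integer, String> byLength(String... words) {
        return Arrays.stream(words)
                .collect(Collectors.toMap(
                        String::length,              // key: word length
                        w -> w,                      // value: the word itself
                        (first, second) -> second)); // on key collision, keep the later word
    }

    public static void main(String[] args) {
        // "aa" and "bb" collide on key 2; the merge function keeps "bb"
        System.out.println(ToMapDemo.byLength("aa", "bb", "ccc"));
    }
}
```

Without the merge function, a duplicate key would throw an IllegalStateException.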
Only the common Stream operations are listed above. Just as Lambda has specialized built-in functional interfaces, Stream has specialized variants such as IntStream and LongStream; they are used the same way as Stream but add common numeric methods such as sum. They are not covered in detail here; consult them when you need them!
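For example, IntStream works on primitive ints without boxing and offers numeric terminals such as sum and summaryStatistics directly. A minimal sketch (the class and method names are mine):

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

public class IntStreamDemo {
    static IntSummaryStatistics stats() {
        // rangeClosed(1, 5) produces 1..5; summaryStatistics gathers count/sum/min/max/average in one pass
        return IntStream.rangeClosed(1, 5).summaryStatistics();
    }

    public static void main(String[] args) {
        IntSummaryStatistics s = stats();
        System.out.println(s.getSum() + " " + s.getMax() + " " + s.getAverage()); // 15 5 3.0
    }
}
```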
parallelStream: parallel streams
Java also provides a much faster way to process data in parallel: a parallel stream splits its contents across multiple threads of ForkJoinPool.commonPool() using the Fork/Join framework. Converting a serial stream into a parallel one is very simple, as follows:
Stream.of().parallel()
Code that operates on a parallel stream looks almost identical to serial-stream code, but because parallel streams share the common ForkJoinPool, avoid blocking or heavyweight tasks in them as much as possible; such tasks would slow down every other part of the system that relies on parallel streams.
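For associative, non-blocking operations such as summing, a serial and a parallel stream give the same result; only the execution strategy differs. A minimal sketch (the class and method names are mine):

```java
import java.util.stream.IntStream;

public class ParallelDemo {
    static int serialSum() {
        return IntStream.rangeClosed(1, 100).sum();
    }

    static int parallelSum() {
        // Same result; the work is simply split across ForkJoinPool.commonPool() threads
        return IntStream.rangeClosed(1, 100).parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println(serialSum() + " " + parallelSum()); // 5050 5050
    }
}
```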
The source code
- 👉 source can be accessed here
Conclusion
- This chapter mainly explained Stream, with examples built on Lambda.
- Using a Stream involves: creating the stream -> intermediate operations -> a terminal operation, where the intermediate stage can chain multiple operations;
- Parallel streams require attention to thread-safety issues (deadlocks, transactions, etc.), so they are better suited to processing data with no thread-safety concerns, while serial streams are the right choice for thread-safety-sensitive, blocking, or heavyweight tasks;
Related articles
👉 [Learn again series]
Finally
- Thank you for patiently reading to the end. If you found this article helpful, please give it a like 👍 or a follow ➕;
- My skills are limited, so the article and code may contain mistakes; I hope you will point them out in the comments. Many thanks 🙏;
- I also welcome you to discuss and learn front-end and Java knowledge with me, and make progress together.