1. The origin of LongAdder
The LongAdder class is an atomic-operation class introduced in JDK 1.8. AtomicLong already provides non-blocking atomic operations through the CAS algorithm, which compares favorably with blocking synchronizers, but the JDK developers were still not satisfied: under heavy concurrent load, AtomicLong's performance is unacceptable.
The incrementAndGet code for AtomicLong is shown below. Although AtomicLong uses the CAS algorithm, when a CAS fails it keeps retrying in an infinite loop (a spin), which is exactly why its performance is poor under high concurrency. The source code is as follows:
public final long incrementAndGet() {
    for (;;) {
        long current = get();
        long next = current + 1;
        if (compareAndSet(current, next))
            return next;
    }
}
Under high concurrency, when N threads operate on the same variable at the same time, a large number of CAS attempts fail and those threads keep spinning, which seriously wastes CPU resources and reduces throughput.
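To make the wasted spins visible, here is a small sketch (the class name CasRetryDemo and the thread/iteration counts are our own choices, not from any library) that counts how often compareAndSet fails when several threads hammer a single AtomicLong:

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class CasRetryDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong value = new AtomicLong();
        LongAdder failedCas = new LongAdder();        // counts only the failed CAS attempts
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    while (true) {
                        long current = value.get();
                        if (value.compareAndSet(current, current + 1)) {
                            break;                    // succeeded, move to the next increment
                        }
                        failedCas.increment();        // this retry is the wasted spin
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("final value     : " + value.get());      // always 800000
        System.out.println("failed CAS tries: " + failedCas.sum());  // grows with contention
    }
}

On a multi-core machine the failed-CAS count grows quickly with the number of threads; every failure is one more wasted loop iteration.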
2. A brief introduction to LongAdder and AtomicLong
- As we know, the volatile keyword is a lightweight synchronization mechanism that solves the visibility problem between threads. With one writer and many readers it can keep a variable synchronized, but with multiple writers it cannot guarantee thread safety.
- For example, a count++ operation should be written with an atomic class instead (see the sketch after this list):
  AtomicInteger count = new AtomicInteger();
  count.addAndGet(1);
- If you are on JDK 8 or above, it is recommended to replace AtomicLong with LongAdder, because LongAdder performs better (it reduces the number of optimistic-lock retries).
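To illustrate the first two points, here is a minimal sketch (the class name VolatileVsAtomicDemo and the thread/iteration counts are ours) showing that count++ on a volatile field loses updates while AtomicInteger does not:

import java.util.concurrent.atomic.AtomicInteger;

public class VolatileVsAtomicDemo {
    // volatile guarantees visibility, but count++ (read-modify-write) is still not atomic
    static volatile int volatileCount = 0;
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    volatileCount++;            // lost updates are possible here
                    atomicCount.addAndGet(1);   // CAS-based, always correct
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("volatile count: " + volatileCount);
        System.out.println("atomic count  : " + atomicCount.get());
    }
}

Typically volatileCount prints less than 400000, while atomicCount always prints exactly 400000.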
The recommendation above comes from the Alibaba Java Development Manual: for counting statistics in Java projects on JDK 8, LongAdder is recommended over AtomicLong for its better performance (fewer optimistic-lock retries). In practice, AtomicLong is still the counter most commonly used in projects and open-source components, so despite the recommendation we should decide whether to use LongAdder based on the actual scenario.
Today we mainly talk about how LongAdder is implemented. In the usual way, this article lifts the veil on LongAdder step by step. By the end you will understand:
- Why does AtomicLong suffer a dramatic performance decline in high concurrency scenarios?
- Why is LongAdder fast?
- The implementation principle of LongAdder (with diagrams)
- Can AtomicLong be abandoned or replaced?
The code in this article is based on JDK 1.8; reading it alongside the JDK source code will make it easier to digest!
3. AtomicLong
When doing counting statistics we usually reach for AtomicLong. It keeps the count accurate under concurrency, using CAS internally to guarantee thread safety.
3.1 Implementation principle of AtomicLong
When it comes to thread-safe counters, the Atomic* classes under java.util.concurrent.atomic naturally come to mind. AtomicLong is an important atomic class in the JUC package: it performs atomic operations on long values so that the data stays correct under concurrency.
public class AtomicLong extends Number implements java.io.Serializable {
    // +1
    public final long incrementAndGet() {
        return unsafe.getAndAddLong(this, valueOffset, 1L) + 1L;
    }

    // -1
    public final long decrementAndGet() {
        return unsafe.getAndAddLong(this, valueOffset, -1L) - 1L;
    }
}
For counting we generally use incrementAndGet() and decrementAndGet() to add and subtract; both delegate to the getAndAddLong() method of the Unsafe class. Let's look at Unsafe.getAndAddLong():
public final class Unsafe {
    public final long getAndAddLong(Object var1, long var2, long var4) {
        long var6;
        do {
            var6 = this.getLongVolatile(var1, var2);
        } while (!this.compareAndSwapLong(var1, var2, var6, var6 + var4));
        return var6;
    }

    public final native boolean compareAndSwapLong(Object var1, long var2, long var4, long var6);
}
getAndAddLong() performs CAS + spin to update the value held by AtomicLong, thereby guaranteeing that the value is updated atomically.
3.2 AtomicLong bottleneck analysis
As the code above shows, under high concurrency N threads spin on the CAS at the same time; most attempts fail and the threads keep spinning, so the spin in AtomicLong becomes the bottleneck. In high-concurrency scenarios AtomicLong's performance therefore drops dramatically, which the benchmark below will demonstrate.
Is there a better alternative for counting under high concurrency? In JDK 8 Doug Lea added LongAdder precisely to solve this problem. Let's see how LongAdder optimizes it.
4. LongAdder
4.1 Performance test of LongAdder and AtomicLong
We keep saying that LongAdder performs better than AtomicLong. Is that true? Let the code speak:
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import com.google.common.collect.Lists; // Guava

/**
 * Timing test for AtomicLong and LongAdder
 */
public class AtomicLongAdderTest {
    public static void main(String[] args) throws Exception {
        testAtomicLongAdder(1, 10000000);
        testAtomicLongAdder(10, 10000000);
        testAtomicLongAdder(100, 10000000);
    }

    static void testAtomicLongAdder(int threadCount, int times) throws Exception {
        System.out.println("threadCount: " + threadCount + ", times: " + times);
        long start = System.currentTimeMillis();
        testLongAdder(threadCount, times);
        System.out.println("LongAdder time: " + (System.currentTimeMillis() - start) + "ms");

        System.out.println("threadCount: " + threadCount + ", times: " + times);
        long atomicStart = System.currentTimeMillis();
        testAtomicLong(threadCount, times);
        System.out.println("AtomicLong time: " + (System.currentTimeMillis() - atomicStart) + "ms");
        System.out.println("----------------------------------------");
    }

    static void testAtomicLong(int threadCount, int times) throws Exception {
        AtomicLong atomicLong = new AtomicLong();
        List<Thread> list = Lists.newArrayList();
        for (int i = 0; i < threadCount; i++) {
            list.add(new Thread(() -> {
                for (int j = 0; j < times; j++) {
                    atomicLong.incrementAndGet();
                }
            }));
        }
        for (Thread thread : list) {
            thread.start();
        }
        for (Thread thread : list) {
            thread.join();
        }
        System.out.println("AtomicLong value is : " + atomicLong.get());
    }

    static void testLongAdder(int threadCount, int times) throws Exception {
        LongAdder longAdder = new LongAdder();
        List<Thread> list = Lists.newArrayList();
        for (int i = 0; i < threadCount; i++) {
            list.add(new Thread(() -> {
                for (int j = 0; j < times; j++) {
                    longAdder.increment();
                }
            }));
        }
        for (Thread thread : list) {
            thread.start();
        }
        for (Thread thread : list) {
            thread.join();
        }
        System.out.println("LongAdder value is : " + longAdder.longValue());
    }
}
Execution result: as concurrency increases, AtomicLong's performance deteriorates dramatically, taking several times as long as LongAdder. Let's now look at the reasons.
4.2 Why is LongAdder so fast
So, given that LongAdder can improve performance so noticeably in high-concurrency environments, how does it do it?
1. Design idea: LongAdder lowers the frequency of CAS failures by splitting ("segmenting") the hot value
Here we only sketch the idea; the detailed principle of LongAdder is covered later.
As we know, AtomicLong holds the actual long value in a single internal variable, value, and all operations work on that variable. In other words, under high concurrency value is a piece of hot data: N threads all compete for one hot spot.
The basic idea of LongAdder is to disperse that hot spot: updates are spread across an array of cells, and different threads map to different slots. Each thread only CASes the value in its own slot, so contention is spread out and the probability of conflict drops sharply.
LongAdder also has a global variable, volatile long base. When contention is low, the base value is updated directly via CAS; if that CAS fails, the thread instead CASes a Cell in LongAdder's Cell[] array, which greatly reduces the probability of failure.
For example, suppose base = 10 and three threads try to CAS it at the same time. Thread 1 succeeds (base becomes 11), while threads 2 and 3 fail and fall back to CAS operations on elements of the Cell[] array: the cells at index 1 and index 2 each get their value set to 1. The final sum = 11 + 1 + 1 = 13, and the LongAdder operation is complete.
To read the real long value, you simply add base and the values in all the slots. This segmentation idea is similar to the segmented locking used by ConcurrentHashMap in JDK 7.
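The idea can be boiled down to a few lines. Below is a deliberately simplified sketch of the striping concept (the class name SimpleStripedCounter, the fixed stripe count, and the thread-id-based slot choice are our own simplifications; the real LongAdder grows its Cell[] dynamically, rehashes contending threads, and pads the cells):

import java.util.concurrent.atomic.AtomicLong;

public class SimpleStripedCounter {
    private final AtomicLong[] stripes;

    public SimpleStripedCounter(int stripeCount) {
        stripes = new AtomicLong[stripeCount];
        for (int i = 0; i < stripeCount; i++) {
            stripes[i] = new AtomicLong();
        }
    }

    public void increment() {
        // Different threads tend to land on different stripes, so CAS conflicts become rare
        int index = (int) (Thread.currentThread().getId() % stripes.length);
        stripes[index].incrementAndGet();
    }

    public long sum() {
        // The total only exists when it is read: add up every stripe
        long sum = 0;
        for (AtomicLong stripe : stripes) {
            sum += stripe.get();
        }
        return sum;
    }
}

The real LongAdder applies the same principle, just with a far more careful slot-assignment and expansion strategy, which the rest of this article walks through.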
2. The @Contended annotation eliminates false sharing
In LongAdder's parent class, Striped64, there is a volatile Cell[] cells array whose length is always a power of two, and each Cell is annotated with @Contended. The annotation pads the cache line and thereby avoids false sharing; false sharing causes cache lines to be invalidated and increases cache-coherence overhead.
@sun.misc.Contended
static final class Cell { ... }
False sharing occurs when multiple threads read and write different variables that happen to sit in the same CPU cache line. Although the variables are logically unrelated, because they are adjacent in memory they share a cache line, and each thread's writes keep invalidating the line for the others, causing frequent cache misses and degraded performance. I only mention the concept here; if you want the details, there is plenty of material on false sharing to look up.
The usual solution is padding: we just need to make sure that variables written by different threads live on different cache lines, and padding them with extra bytes achieves that, so the false-sharing problem disappears. The Disruptor queue, for example, uses a similar design.
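As a hedged sketch of the same trick (the class and field names are ours, not from the JDK): two hot counters kept on separate cache lines via @sun.misc.Contended. Note that on JDK 8 the annotation only takes effect for classes outside the JDK when the JVM is started with -XX:-RestrictContended.

public class ContendedCounters {

    // Padded so that each counter gets its own cache line
    @sun.misc.Contended
    static final class PaddedLong {
        volatile long value;
    }

    final PaddedLong writerCounter = new PaddedLong(); // updated only by thread A
    final PaddedLong readerCounter = new PaddedLong(); // updated only by thread B
}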
In the Striped64 class, Doug Lea's comment on the Cell class explains exactly this point. Paraphrased, it reads:
The Cell class is a padded (via @sun.misc.Contended) variant of AtomicLong supporting only the operations needed here. Cache-line padding would be wasteful for most atomics, because they are usually scattered irregularly in memory and therefore interfere with each other very little. But atomic objects residing in arrays tend to be placed adjacent to one another, so without this precaution they would most often share cache lines, with a huge negative impact on performance.
3. Lazy evaluation
Only when longValue() is called to obtain the current total does LongAdder actually settle the count: longValue() calls sum(), which adds base and the values in the Cell array and returns the result. Writing and reading the count are thus separated.
AtomicLong's incrementAndGet(), by contrast, returns the new long count on every call, so every increment also pays for producing a result, adding extra overhead.
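A tiny usage sketch (the class name LazySumDemo is ours) makes the difference concrete: LongAdder's writes return void, and the total is only aggregated when it is read:

import java.util.concurrent.atomic.LongAdder;

public class LazySumDemo {
    public static void main(String[] args) {
        LongAdder counter = new LongAdder();
        for (int i = 0; i < 5; i++) {
            counter.increment();   // a write: returns void, no total is computed here
        }
        // The total is only aggregated at read time: longValue() delegates to sum()
        System.out.println(counter.longValue()); // prints 5
    }
}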
4.3 Implementation principle of LongAdder
As mentioned earlier, with AtomicLong multiple threads perform atomic operations on a single hot value. In LongAdder, each thread generally has its own slot and only CASes the value held in that slot.
For example, if three threads increment value by 1, then value = 1 + 1 + 1 = 3
LongAdder, by contrast, keeps a base variable plus a Cell[] array. When there is no contention, updates go straight to base via CAS; when contention occurs, each thread CASes the value held in its own slot of the Cell[] array. The final count is base plus the sum of all the cells.
4.4 LongAdder source code analysis
Having analyzed the reason for LongAdder's performance with diagrams, let's now look at how LongAdder is implemented in the source code:
public class LongAdder extends Striped64 implements Serializable {

    public void increment() {
        add(1L);
    }

    public void add(long x) {
        Cell[] as; long b, v; int m; Cell a;
        if ((as = cells) != null || !casBase(b = base, b + x)) {
            boolean uncontended = true;
            if (as == null || (m = as.length - 1) < 0 ||
                (a = as[getProbe() & m]) == null ||
                !(uncontended = a.cas(v = a.value, v + x)))
                longAccumulate(x, null, uncontended);
        }
    }

    final boolean casBase(long cmp, long val) {
        return UNSAFE.compareAndSwapLong(this, BASE, cmp, val);
    }
}
increment() simply calls add(1L), so the interesting logic is in add(). Let's go through it.
Variable description:
- as: the reference to the cells array
- b: the base value that was read
- v: the expected (current) value of a cell
- m: cells.length - 1, used as a mask when indexing
- a: the Cell the current thread hashes to
Condition analysis:
Condition one: as == null || (m = as.length - 1) < 0 is true when the cells array has not been initialized. If it is false, the cells array has been initialized and the current thread should locate its Cell in the array and write its value there.
Condition two: (a = as[getProbe() & m]) == null. getProbe() returns the current thread's hash (probe) value; m is cells.length - 1, and since the cells length is a power of two, the modulo can be replaced by a bitwise AND, which is faster (as mentioned earlier). If this condition is true, the cell at the position the current thread hashes to is still empty, so longAccumulate() is executed. If it is false, the cell exists, and the next condition tries to add x to it via CAS.
Condition three: !(uncontended = a.cas(v = a.value, v + x)). The key part is a.cas(v = a.value, v + x). If the CAS succeeds, the if block is exited; if it fails, the longAccumulate() method is called.
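To tie the three conditions together, here is the same add() body again with each branch annotated (the comments are ours, not from the JDK source):

public void add(long x) {
    Cell[] as; long b, v; int m; Cell a;
    // Fast path: if cells does not exist yet, try a single CAS on base
    if ((as = cells) != null || !casBase(b = base, b + x)) {
        // Slow path: cells already exists, or the CAS on base just failed
        boolean uncontended = true;
        if (as == null || (m = as.length - 1) < 0 ||          // condition one: cells not initialized
            (a = as[getProbe() & m]) == null ||               // condition two: this thread's cell is empty
            !(uncontended = a.cas(v = a.value, v + x)))       // condition three: CAS on the cell failed
            longAccumulate(x, null, uncontended);             // hand over to the heavyweight path
    }
}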
Looking at the core longAccumulate() method, the code is long and will be reviewed step by step.
java.util.concurrent.atomic.Striped64#longAccumulate():
final void longAccumulate(long x, LongBinaryOperator fn, boolean wasUncontended) {
    int h;
    if ((h = getProbe()) == 0) {
        ThreadLocalRandom.current();        // force initialization
        h = getProbe();
        wasUncontended = true;
    }
    boolean collide = false;                // True if last slot nonempty
    for (;;) {
        Cell[] as; Cell a; int n; long v;
        if ((as = cells) != null && (n = as.length) > 0) {
            if ((a = as[(n - 1) & h]) == null) {
                if (cellsBusy == 0) {       // Try to attach new Cell
                    Cell r = new Cell(x);   // Optimistically create
                    if (cellsBusy == 0 && casCellsBusy()) {
                        boolean created = false;
                        try {               // Recheck under lock
                            Cell[] rs; int m, j;
                            if ((rs = cells) != null && (m = rs.length) > 0 &&
                                rs[j = (m - 1) & h] == null) {
                                rs[j] = r;
                                created = true;
                            }
                        } finally {
                            cellsBusy = 0;
                        }
                        if (created)
                            break;
                        continue;           // Slot is now non-empty
                    }
                }
                collide = false;
            }
            else if (!wasUncontended)       // CAS already known to fail
                wasUncontended = true;      // Continue after rehash
            else if (a.cas(v = a.value, ((fn == null) ? v + x : fn.applyAsLong(v, x))))
                break;
            else if (n >= NCPU || cells != as)
                collide = false;            // At max size or stale
            else if (!collide)
                collide = true;
            else if (cellsBusy == 0 && casCellsBusy()) {
                try {
                    if (cells == as) {      // Expand table unless stale
                        Cell[] rs = new Cell[n << 1];
                        for (int i = 0; i < n; ++i)
                            rs[i] = as[i];
                        cells = rs;
                    }
                } finally {
                    cellsBusy = 0;
                }
                collide = false;
                continue;                   // Retry with expanded table
            }
            h = advanceProbe(h);
        }
        else if (cellsBusy == 0 && cells == as && casCellsBusy()) {
            boolean init = false;
            try {                           // Initialize table
                if (cells == as) {
                    Cell[] rs = new Cell[2];
                    rs[h & 1] = new Cell(x);
                    cells = rs;
                    init = true;
                }
            } finally {
                cellsBusy = 0;
            }
            if (init)
                break;
        }
        else if (casBase(v = base, ((fn == null) ? v + x : fn.applyAsLong(v, x))))
            break;                          // Fall back on using base
    }
}
The code is long, with lots of if/else branches; read carelessly, it's a headache. Let's analyze it bit by bit and, combined with diagrams, work out the implementation principle step by step.
First, we need to be clear about the preconditions for entering this method. They are OR-related, namely conditions one, two, and three above:
- The cells array has not been initialized
- The cells array has been initialized, but the cell corresponding to the current thread is empty
- The cells array has been initialized and the cell corresponding to the current thread is not empty, but the CAS that tried to add to it failed
The input to the longAccumulate() method:
- long x: the value to add, 1 by default
- LongBinaryOperator fn: passed as null by LongAdder
- wasUncontended: whether the call was free of contention. false means there was contention; it is false only when cells is already initialized and the current thread's CAS on its cell has failed
Then look at the definitions of some variables or methods in Striped64:
- base: similar to the value in AtomicLong. When there is no contention, data is added directly to base; base is also used as a fallback while cells is being initialized or expanded
- collide: the expansion intent. false means no expansion will be attempted; only when it is true may the array be expanded
- cellsBusy: the lock used when initializing or expanding cells. 0 means unlocked; 1 means the lock is held by some thread
- casCellsBusy(): sets cellsBusy from 0 to 1 via CAS; if the CAS succeeds the lock has been acquired and true is returned
- NCPU: the number of CPUs of the machine, used to bound Cell array expansion
- getProbe(): obtains the current thread's hash (probe) value
- advanceProbe(): rehashes, i.e. changes the current thread's probe value
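The cellsBusy field is nothing more than a one-bit spin lock flipped from 0 to 1 with CAS. The sketch below (class and method names are ours; the real Striped64 does this through Unsafe on its own int field) shows the same pattern with an AtomicIntegerFieldUpdater:

import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

public class CellsBusyLockSketch {
    private volatile int cellsBusy; // 0 = unlocked, 1 = locked

    private static final AtomicIntegerFieldUpdater<CellsBusyLockSketch> BUSY =
            AtomicIntegerFieldUpdater.newUpdater(CellsBusyLockSketch.class, "cellsBusy");

    boolean casCellsBusy() {
        // Succeeds (and thereby "acquires the lock") only if cellsBusy was still 0
        return BUSY.compareAndSet(this, 0, 1);
    }

    void unlock() {
        cellsBusy = 0; // a volatile write is enough to release
    }

    public static void main(String[] args) {
        CellsBusyLockSketch lock = new CellsBusyLockSketch();
        System.out.println(lock.casCellsBusy()); // true  -- acquired
        System.out.println(lock.casCellsBusy()); // false -- already held
        lock.unlock();
        System.out.println(lock.casCellsBusy()); // true  -- acquired again
    }
}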
Now let's walk through longAccumulate(), starting with the probe initialization:
private static final long PROBE;

if ((h = getProbe()) == 0) {
    ThreadLocalRandom.current();
    h = getProbe();
    wasUncontended = true;
}

static final int getProbe() {
    return UNSAFE.getInt(Thread.currentThread(), PROBE);
}
UNSAFE.getInt() reads the current thread's hash (probe) value; PROBE is the memory offset of the thread's threadLocalRandomProbe field, resolved during static initialization.
Note: Unsafe has three related read methods: getInt(Object o, long offset), getInt(long address), and getIntVolatile(Object o, long offset). All of them read a value from a given location; the first treats offset as relative to object o (or as an absolute address if o is null), the second takes an absolute address, and the third is a read with volatile semantics.
If the current thread's probe value h = getProbe() is 0, then 0 ANDed with any mask is 0, so the thread would always map to the first slot of the array. To avoid that, ThreadLocalRandom.current() is called to (re)initialize the thread's probe and a fresh value is read. wasUncontended is then set to true: after the rehash the thread behaves like a brand-new thread, so its contention status is reset.
Then we execute the for loop. We can break the for loop code apart, counting each if condition as a CASE:
final void longAccumulate(long x, LongBinaryOperator fn, boolean wasUncontended) {
    for (;;) {
        Cell[] as; Cell a; int n; long v;
        if ((as = cells) != null && (n = as.length) > 0) {
            // CASE1
        }
        else if (cellsBusy == 0 && cells == as && casCellsBusy()) {
            // CASE2
        }
        else if (casBase(v = base, ((fn == null) ? v + x : fn.applyAsLong(v, x))))
            break; // CASE3
    }
}
As shown above, the first if is CASE1 (its inner branches will be referred to as CASE1.1, CASE1.2, and so on), the first else if is CASE2, and the last else if is CASE3.
CASE1 execution conditions:
if ((as = cells) != null && (n = as.length) > 0) { ... }
If the cells array is not empty and the array length is greater than 0, CASE1 will be executed.
CASE2 implementation conditions and implementation principle:
else if (cellsBusy == 0 && cells == as && casCellsBusy()) {
    boolean init = false;
    try {
        if (cells == as) {
            Cell[] rs = new Cell[2];
            rs[h & 1] = new Cell(x);
            cells = rs;
            init = true;
        }
    } finally {
        cellsBusy = 0;
    }
    if (init)
        break;
}
CASE2 handles the situation where the cells array has not been initialized yet; cells == as checks that the cells reference read by the current thread has not been replaced in the meantime. We'll look at this case first and come back afterwards to CASE1, which has the most involved logic.
cellsBusy == 0 means no thread currently holds the lock; cells == as means the array reference held by the current thread has not been changed by another thread; casCellsBusy() then acquires the lock via CAS. After the lock is acquired, cells == as is checked again: between the first check and taking the lock, another thread may already have initialized the array, and this double check prevents that initialization from being overwritten.
Cell[] rs = new Cell[2] creates an array of length 2, and rs[h & 1] = new Cell(x) creates a new Cell whose value is x (1 by default) and places it at index h & 1.
h & 1 is the same bucket-index trick used by HashMap and ThreadLocal: hash & (table.length - 1), which I won't expand on here. Once this branch completes, the for loop is exited.
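For completeness, a two-line demo (the class name PowerOfTwoIndexDemo is ours) of why the bitwise AND works: for a non-negative hash and a power-of-two length, hash & (length - 1) equals hash % length.

public class PowerOfTwoIndexDemo {
    public static void main(String[] args) {
        int hash = 13;
        int length = 8;                           // a power of two, like cells.length
        System.out.println(hash % length);        // 5
        System.out.println(hash & (length - 1));  // 5 -- same bucket, cheaper than %
    }
}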
CASE3 execution conditions and implementation principle:
else if (casBase(v = base, ((fn == null) ? v + x : fn.applyAsLong(v, x))))
    break;
CASE3 is reached when the cells array is being initialized (the lock is held by another thread) or the reference has just been swapped; the current thread then falls back to modifying base directly with casBase(). If the CAS on base succeeds the loop exits; otherwise the loop starts over.
CASE1 implementation principle:
Having analyzed CASE2 and CASE3, let's go back to CASE1, which is entered when the cells array is non-empty, i.e. its initialization has completed.
Moving on, let’s start with the first decision branch, CASE1.1:
if ((a = as[(n - 1) & h]) == null) {
    if (cellsBusy == 0) {
        Cell r = new Cell(x);
        if (cellsBusy == 0 && casCellsBusy()) {
            boolean created = false;
            try {
                Cell[] rs; int m, j;
                if ((rs = cells) != null && (m = rs.length) > 0 &&
                    rs[j = (m - 1) & h] == null) {
                    rs[j] = r;
                    created = true;
                }
            } finally {
                cellsBusy = 0;
            }
            if (created)
                break;
            continue;
        }
    }
    collide = false;
}
The condition (a = as[(n - 1) & h]) == null means the cell at the array index the current thread maps to is null: no thread has created a Cell there yet.
cellsBusy == 0 means the lock is currently free, so a new Cell holding x is created optimistically. cellsBusy == 0 is then checked again and casCellsBusy() tries to set cellsBusy to 1 via CAS; if that succeeds, the thread holds the lock and may install the new Cell. If the lock cannot be taken, the branch falls through and the expansion intent is cleared with collide = false.
// Re-check under the cellsBusy lock before installing the new Cell
if ((rs = cells) != null && (m = rs.length) > 0 &&
    rs[j = (m - 1) & h] == null) {
    rs[j] = r;
    created = true;
}
if (created)
    break;      // the cell was installed, exit the outer loop
continue;       // the slot was taken by another thread, retry the outer loop
This code checks, once more, whether the slot the current thread hashes to is still empty; if so it installs the cell into the array and breaks out of the loop, and if not it releases the lock and continues the outer loop.
Moving on to the code, CASE1.2:
else if (!wasUncontended)
    wasUncontended = true;

h = advanceProbe(h);
Reaching CASE1.2 means the slot for the current thread already holds a Cell and wasUncontended was passed in as false (the caller's CAS on that cell had already failed). The flag is simply reset to true, and at the bottom of the loop h = advanceProbe(h) rehashes the thread so the next iteration tries a (possibly) different cell.
Moving on to CASE1.3:
else if (a.cas(v = a.value, ((fn == null) ? v + x : fn.applyAsLong(v, x))))
    break;
Entering CASE1.3 means the current thread's slot holds a Cell and, if necessary, the hash has already been reset. A CAS is attempted to add x (1 by default) to that cell's value; on success the loop exits.
Moving on to CASE1.4:
else if (n >= NCPU || cells != as)
    collide = false;
If the length of the cells array has already reached the number of CPU cores, or cells != as (the array has been replaced, e.g. expanded, by another thread), the expansion intent is cleared with collide = false, and the thread retries after rehashing via the h = advanceProbe(h) call below.
Why the check against the number of CPUs? Each thread adds its value to cells[threadHash % cells.length], which effectively binds the thread to one Cell. The CPU count represents the machine's true parallelism, so once the array length reaches it, adding further cells brings little benefit, and the array is not expanded any further.
Moving on to CASE1.5:
else if (!collide)
    collide = true;
If the expansion intent collide is false, it is set to true, and the thread rehashes and loops again. Note that CASE1.4 resets collide to false whenever the array is already at least the size of the CPU count, which guarantees the expansion in CASE1.6 is not reached once the intent has been cleared.
Moving on to the CASE1.6 branch:
else if (cellsBusy == 0 && casCellsBusy()) {
    try {
        if (cells == as) {
            Cell[] rs = new Cell[n << 1];
            for (int i = 0; i < n; ++i)
                rs[i] = as[i];
            cells = rs;
        }
    } finally {
        cellsBusy = 0;
    }
    collide = false;
    continue;
}
If casCellsBusy() succeeds, the lock has been acquired. The code then checks that cells == as, i.e. the array has not already been expanded by another thread, and expands it to twice its previous capacity with a left shift by one bit.
Cell[] rs = new Cell[n << 1];
After the expansion, the elements of the old array are copied into the new one, the lock is released with cellsBusy = 0, the expansion intent is cleared (collide = false), and the loop continues.
At this point, we have analyzed all the logic of longAccumulate(). There are quite a few branches of longAccumulate. If we look at them carefully, they are quite clear.
For scenarios not fully covered by the walkthrough above, you can take a few concrete thread-execution examples yourself, run them through this model, and trace the code flow.
If you have any questions, please point them out in time. I will correct them as soon as possible. Thank you very much!
4.5 LongAdder’s sum method
When we finally want the counter's value, we call LongAdder.longValue(), which internally uses the sum() method to aggregate the data.
java.util.concurrent.atomic.LongAdder#sum():
public long sum() {
    Cell[] as = cells; Cell a;
    long sum = base;
    if (as != null) {
        for (int i = 0; i < as.length; ++i) {
            if ((a = as[i]) != null)
                sum += a.value;
        }
    }
    return sum;
}
The implementation is simple: start from base, traverse the cells array, and add each cell's value to the total.
4.6 Can AtomicLong be deprecated?
LongAdder seems to outperform AtomicLong across the board, and the Alibaba development manual recommends LongAdder for its better performance (fewer optimistic-lock retries) — but can we really abandon AtomicLong?
Of course not; it depends on the scenario. If the system's concurrency is not high, AtomicLong may be the better choice, and it also uses less memory.
Also, as the sum() method shows, if the LongAdder is being updated concurrently while the total is computed, the returned statistic may be slightly off: it is not an atomic snapshot.
LongAdder is better suited to high-concurrency counting scenarios.
5. Summary
The core idea of LongAdder is to trade space for time: the hot value is split across a list of Cells so that concurrent CAS operations are spread out, which improves performance.
The principle and implementation of LongAdder are not complicated, but the design ideas behind it are well worth savoring and learning from.