1 Python function argument passing
Look at two examples:

a = 1
def fun(a):
    a = 2
fun(a)
print(a)  # 1

a = []
def fun(a):
    a.append(1)
fun(a)
print(a)  # [1]
In Python, everything is an object. Strictly speaking, we can't say "pass by value" or "pass by reference"; we should say "pass an immutable object" or "pass a mutable object". Every variable can be understood as a "reference" to an object in memory. The address a reference holds can be observed easily with id():
a = 1
def fun(a):
    print("func_in", id(a))          # func_in 41322472
    a = 2
    print("re-point", id(a), id(2))  # re-point 41322448 41322448

print("func_out", id(a), id(1))      # func_out 41322472 41322472
fun(a)
print(a)  # 1
As you can see, after a = 2 executes inside the function, the address stored in the local reference a changes from the address of the object 1 to the address of the object 2. In the second example, the address held by the reference a never changes:
a = []
def fun(a):
    print("func_in", id(a))   # func_in 53629256
    a.append(1)

print("func_out", id(a))      # func_out 53629256
fun(a)
print(a)  # [1]
Mutable and immutable objects:
- Immutable type: after a=5, assigning a=10 creates a new int object 10 and re-points a to it; the object 5 is discarded.
- Mutable type: after la=[1,2,3,4], assigning la[2]=5 changes the third element of la in place; la still points to the same list object.
In Python, strings, tuples, and numbers are immutable objects, while lists, dicts, and so on are mutable objects. For function arguments this means:
- Immutable types: behave like C++ pass-by-value (numbers, strings, tuples). In fun(a), only the value of a is passed; the object a itself is unaffected. Rebinding a inside fun(a) merely points the local name at another object and does not touch the outer a.
- Mutable types: behave like C++ pass-by-reference (lists, dictionaries). If fun(la) modifies la in place, the la outside fun is affected too.
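A minimal sketch of the practical consequence (the function name fun and the value 99 are my own illustration): if you want to protect the caller's list from in-place modification, pass a copy.

```python
def fun(la):
    # mutates whatever list object it receives
    la.append(99)

la = [1, 2, 3]
fun(la)        # the caller's list object is modified
print(la)      # [1, 2, 3, 99]

lb = [1, 2, 3]
fun(list(lb))  # pass a shallow copy; lb itself stays untouched
print(lb)      # [1, 2, 3]
```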
2 Metaclass in Python
type
There are two basic concepts in OOP: classes and objects:
- A class is a piece of code that describes how to create objects; it describes a collection of objects with the same properties and methods, and defines the properties and methods common to every object in that collection.
- An object is an Instance of a class.
In Python, a class is itself an object: as soon as Python executes a class statement, it creates an object. This object (the class) can itself create objects (instances), which is why it is a class. Therefore: it can be assigned to a variable; it can be copied; you can add attributes to it; it can be passed as a function argument (see www.cnblogs.com/tkqasn/p/65…). We can also create classes dynamically, using the type function.
type(class name, tuple of parent classes (may be empty when there is no inheritance), dictionary of attributes (names and values))
1. Build the class Foo

# built with a class statement
class Foo(object):
    bar = True

# built with type
Foo = type('Foo', (), {'bar': True})

2. Inherit from Foo

# built with a class statement
class FooChild(Foo):
    pass

# built with type
FooChild = type('FooChild', (Foo,), {})
print(FooChild)
print(FooChild.bar)  # the bar attribute is inherited from Foo
# output: True

3. Add a method to the FooChild class

def echo_bar(self):
    print(self.bar)

FooChild = type('FooChild', (Foo,), {'echo_bar': echo_bar})
hasattr(Foo, 'echo_bar')
# output: False
hasattr(FooChild, 'echo_bar')
# output: True
my_foo = FooChild()
my_foo.echo_bar()
# output: True
What is a metaclass
Classes in Python are also objects. Metaclasses are used to create these classes (objects). Metaclasses are classes of classes. You can think of them as:
MyClass = MetaClass()   # a metaclass creates a class
MyObject = MyClass()    # a class creates an instance

In fact, MyClass is created by type(); it is an instance of the type class. MyClass is itself a class, and you can create instances of it — here, MyObject.
The function type is actually a metaclass: type is the metaclass Python uses behind the scenes to create all classes. You may wonder why it is all lowercase and not Type. I guess this is for consistency with str, the class that creates string objects, and int, the class that creates integer objects; type is the class that creates class objects. You can verify this by examining the __class__ attribute. Everything in Python — and I mean everything — is an object: integers, strings, functions, and classes alike. All of them are created from a class. So a metaclass is whatever creates class objects; type is Python's built-in metaclass, and of course you can also create your own.
The __metaclass__ attribute

In Python 2 you can add a __metaclass__ attribute when you write a class; defining __metaclass__ sets the metaclass of that class. In Python 3 the metaclass is passed as a keyword argument instead:

# Python 3
class Foo(metaclass=something):
    ...

# Python 2
class Foo(object):
    __metaclass__ = something
    ...
For example, when we write the following code:
class Foo(Bar):
    pass
When Python reaches this class statement (Python 2 semantics), it does the following:
1) Does Foo have a __metaclass__ attribute? If so, Python creates a class object named Foo in memory via __metaclass__.
2) If Foo has no __metaclass__ attribute, Python looks for __metaclass__ in the parent classes and tries the same thing.
3) If no parent class has __metaclass__, Python looks for __metaclass__ at the module level and tries the same thing.
4) If __metaclass__ is still not found, Python creates the class object with the built-in type.
The question now is: what can you put in __metaclass__? The answer: anything that can create a class. And what can create a class? type, or anything that uses type or subclasses it — a function, for example, or a class.

# the metaclass automatically receives the same arguments you would pass to type
def upper_attr(future_class_name, future_class_parents, future_class_attr):
    """Return a class object whose attribute names are uppercased."""
    # pick all attributes that do not start with '__'
    attrs = ((name, value) for name, value in future_class_attr.items() if not name.startswith('__'))
    # uppercase them
    uppercase_attr = dict((name.upper(), value) for name, value in attrs)
    # create the class object via type
    return type(future_class_name, future_class_parents, uppercase_attr)  # returns a class

class Foo(metaclass=upper_attr):
    bar = 'bip'

print(hasattr(Foo, 'bar'))
# output: False
print(hasattr(Foo, 'BAR'))
# output: True
f = Foo()
print(f.BAR)
# output: 'bip'
Using a class as a metaclass

Remember that type is really a class, just like str and int, so you can inherit from it. __new__ is a special method called before __init__: __new__ creates and returns the object, while __init__ merely initializes the freshly created object with the arguments passed in — it runs after the object exists. You rarely touch __new__ unless you want to control object creation. Here the object being created is a class, and we want to customize it, so we override __new__ (you can still do things in __init__ if you want). There are further advanced uses involving the __call__ special method, but we won't use them here.
class UpperAttrMetaClass(type):
    def __new__(cls, future_class_name, future_class_parents, future_class_attr):
        attrs = ((name, value) for name, value in future_class_attr.items() if not name.startswith('__'))
        uppercase_attr = dict((name.upper(), value) for name, value in attrs)
        return type(future_class_name, future_class_parents, uppercase_attr)  # returns an object that is also a class
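One refinement worth knowing (my own sketch, not from the original): delegating to super().__new__ instead of calling type(...) directly keeps the metaclass itself subclassable, and classes created through it report the metaclass as their type.

```python
class UpperAttrMetaclass(type):
    def __new__(cls, clsname, bases, attr_dict):
        # uppercase every non-dunder attribute name
        uppercase_attr = {
            (name if name.startswith('__') else name.upper()): value
            for name, value in attr_dict.items()
        }
        # super().__new__ keeps this metaclass usable as a base class
        return super().__new__(cls, clsname, bases, uppercase_attr)

class Foo2(metaclass=UpperAttrMetaclass):
    bar = 'bip'

print(hasattr(Foo2, 'bar'))       # False
print(hasattr(Foo2, 'BAR'))       # True
print(type(Foo2).__name__)        # UpperAttrMetaclass
```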
3 @classmethod and @staticmethod
There are three methods in Python: staticmethod, classmethod, and instance method.
def foo(x):
    print("executing foo(%s)" % (x,))

class A(object):
    def foo(self, x):
        print("executing foo(%s, %s)" % (self, x))

    @classmethod
    def class_foo(cls, x):
        print("executing class_foo(%s, %s)" % (cls, x))

    @staticmethod
    def static_foo(x):
        print("executing static_foo(%s)" % x)

a = A()
self and cls bind a method to an instance or a class. A plain function like foo(x) is the most ordinary kind; its work depends on nothing else (no class, no instance). For an instance method, every method we define in a class must bind the instance — foo(self, x). Why? Because an instance method cannot be called without an instance; we must hand the instance itself to the function, so a.foo(x) is really foo(a, x). A class method is the same, except it receives the class rather than the instance: A.class_foo(x). Note that self and cls could be named anything else, but by Python convention they should not be changed. A static method is just like an ordinary function — it binds to nothing; the only difference is that it is called as a.static_foo(x) or A.static_foo(x).
| | Instance method | Class method | Static method |
|---|---|---|---|
| a = A() | a.foo(x) | a.class_foo(x) | a.static_foo(x) |
| A | unavailable | A.class_foo(x) | A.static_foo(x) |
4 Class variables and instance variables
Class variables:
Values shared between all instances of a class (that is, not allocated separately to each instance). In the example below, num_of_instance is a class variable used to track how many instances of Test exist.
Instance variables:
After instantiation, each instance has its own separate copy of the variable.
class Test(object):
    num_of_instance = 0
    def __init__(self, name):
        self.name = name
        Test.num_of_instance += 1

if __name__ == '__main__':
    print(Test.num_of_instance)   # 0
    t1 = Test('jack')
    print(Test.num_of_instance)   # 1
    t2 = Test('lucy')
    print(t1.name, t1.num_of_instance)  # jack 2
    print(t2.name, t2.num_of_instance)  # lucy 2
A supplementary example:
class Person:
    name = "aaa"

p1 = Person()
p2 = Person()
p1.name = "bbb"
print(p1.name)      # bbb
print(p2.name)      # aaa
print(Person.name)  # aaa
Here p1.name = "bbb" creates an instance variable name on p1 that shadows the class variable, while p2.name still looks up the class variable name of Person, so it prints "aaa".
Consider the following example:
class Person:
    name = []

p1 = Person()
p2 = Person()
p1.name.append(1)
print(p1.name)      # [1]
print(p2.name)      # [1]
print(Person.name)  # [1]
5 Python introspection

This is one of Python's strong points.

Introspection means that a program written in an object-oriented language can know the type of an object at run time. In short: at run time you can obtain an object's type — via functions such as type(), dir(), getattr(), hasattr(), isinstance().
a = [1, 2, 3]
b = {'a': 1, 'b': 2, 'c': 3}
c = True
print(type(a), type(b), type(c))  # <class 'list'> <class 'dict'> <class 'bool'>
print(isinstance(a, list))        # True
6 dictionary derivation
You may have seen list comprehensions before, but perhaps not dictionary comprehensions, which were added in 2.7:

d = {key: value for (key, value) in iterable}
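A couple of concrete uses of the pattern above (the variable names squares/inverted are my own):

```python
# squares of 0..4, built with a dict comprehension
squares = {x: x * x for x in range(5)}
print(squares)  # {0: 0, 1: 1, 2: 4, 3: 9, 4: 16}

# inverting a mapping is another common use
d = {'a': 1, 'b': 2}
inverted = {value: key for key, value in d.items()}
print(inverted)  # {1: 'a', 2: 'b'}
```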
7 Single and double underscores in Python
>>> class MyClass():
...     def __init__(self):
...         self.__superprivate = "Hello"
...         self._semiprivate = ", world!"
...
>>> mc = MyClass()
>>> print(mc.__superprivate)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'MyClass' object has no attribute '__superprivate'
>>> print(mc._semiprivate)
, world!
>>> print(mc.__dict__)
{'_MyClass__superprivate': 'Hello', '_semiprivate': ', world!'}
__foo__: a convention for Python-internal names, used to keep them apart from user-defined names and avoid conflicts — for example the special methods __init__(), __del__(), __call__().
_foo: a convention marking a variable as private — a way for programmers to signal intent. It is skipped by from module import *; otherwise it behaves as public.
__foo: this has a real effect: the parser rewrites the name to _ClassName__foo to distinguish it from identical names in other classes. It cannot be accessed directly as a public member, only via instance._ClassName__foo.
8 String formatting: % and .format

.format seems more convenient in many ways. The most annoying thing about % is that it cannot take both a bare variable and a tuple. You might think the following code is fine:
"hi there %s" % name
Copy the code
However, if name happens to be a tuple such as (1, 2, 3), it throws a TypeError. To be safe you have to write:

"hi there %s" % (name,)  # supply a one-element tuple instead of a bare value

which is a little ugly. .format has none of these problems, and handles the harder cases much better:
>>> "{} {}".format("hello", "world")      # positions filled in default order
'hello world'
>>> "{0} {1}".format("hello", "world")    # explicit positions
'hello world'
>>> "{1} {0} {1}".format("hello", "world")
'world hello world'

# named parameters
print("Site name: {name}, address {url}".format(name="Rookie Tutorial", url="www.runoob.com"))

# parameters from a dictionary
site = {"name": "Rookie Tutorial", "url": "www.runoob.com"}
print("Site name: {name}, address {url}".format(**site))

# parameters from a list, by index
my_list = ['Rookie Tutorial', 'www.runoob.com']
print("Site name: {0[0]}, address {0[1]}".format(my_list))  # the "0" is required
9 Iterators and generators

A Chinese reference: taizilongxu.gitbooks.io/stackoverfl…

Q: If I change [] to () in a list comprehension, does the data structure change? A: Yes — from a list to a generator:
>>> L = [x*x for x in range(10)]
>>> L
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
>>> g = (x*x for x in range(10))
>>> g
<generator object <genexpr> at 0x0000028F8B774200>
A list comprehension creates the whole list at once, but memory is finite, so list capacity is necessarily limited. Creating a list of millions of elements not only takes a lot of memory; if we only ever access the first few elements, the space occupied by the rest is wasted. So when a complete list is unnecessary, we can save a great deal of memory with a generator, which computes values lazily during iteration instead of storing them all up front.
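The same lazy mechanism is available in function form via yield — a minimal sketch (the function name squares is mine):

```python
def squares(n):
    # values are computed one per next() call,
    # instead of materializing the whole list up front
    for x in range(n):
        yield x * x

g = squares(10)
print(next(g))   # 0
print(next(g))   # 1
print(list(g))   # the remaining values: [4, 9, 16, 25, 36, 49, 64, 81]
```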
10 *args and **kwargs

The names args and kwargs are only a convention, chosen for readability; the * and ** are what actually matter.
You can use *args when you are not sure how many arguments to pass in your function. For example, it can pass any number of arguments:
>>> def print_everything(*args):
...     for count, thing in enumerate(args):
...         print('{0}. {1}'.format(count, thing))
...
>>> print_everything('apple', 'banana', 'cabbage')
0. apple
1. banana
2. cabbage
Similarly, **kwargs lets you accept keyword arguments you have not defined in advance:
>>> def table_things(**kwargs):
...     for name, value in kwargs.items():
...         print('{0} = {1}'.format(name, value))
...
>>> table_things(apple='fruit', cabbage='vegetable')
cabbage = vegetable
apple = fruit
You can mix the three. Named parameters are filled first, and everything left over goes into *args and **kwargs; named parameters come first in the signature. For example:

def table_things(titlestring, **kwargs):
*args and **kwargs can be in the function definition together, but *args must precede **kwargs.
You can also use * and ** syntax when calling functions. Such as:
>>> def print_three_things(a, b, c):
...     print('a = {0}, b = {1}, c = {2}'.format(a, b, c))
...
>>> mylist = ['aardvark', 'baboon', 'cat']
>>> print_three_things(*mylist)
a = aardvark, b = baboon, c = cat
As you can see, * unpacks each item of a list (or tuple) into separate positional arguments; the number of items must match the function's parameters. You can use * and ** both in function definitions and in function calls.
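** does the same for dictionaries in a call — a small sketch (the function describe and the dict mydict are my own names):

```python
def describe(a, b, c):
    return 'a = {0}, b = {1}, c = {2}'.format(a, b, c)

# ** unpacks a dict into keyword arguments;
# the keys must match the parameter names
mydict = {'a': 'aardvark', 'b': 'baboon', 'c': 'cat'}
print(describe(**mydict))  # a = aardvark, b = baboon, c = cat
```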
11 Aspect-oriented programming (AOP) and decorators

AOP, in short, is the idea of dynamically weaving code into specified methods and locations of a class — at run time, at compile time, or when classes and methods are loaded. The snippet of code that is woven in is called an aspect; the classes and methods it is woven into are the pointcuts. With AOP, we can pull code shared by several classes out into an aspect and change an object's behavior by weaving it in only when needed.

Decorators are a well-known design pattern, often used where such cross-cutting needs arise — inserting logging, performance measurement, transaction handling. Decorators solve these problems elegantly: with them, we can pull the code that has nothing to do with a function's core job out of a large number of functions and reuse it.
In a nutshell, the purpose of decorators is to add extra functionality to an existing object. (Source: www.jianshu.com/p/4c588eec1…; usage reference: www.runoob.com/w3cnote/pyt…)
def a_new_decorator(a_func):
    def wrapTheFunction():
        print("before a_func()")
        a_func()
        print("after a_func()")
    return wrapTheFunction

def a_func():
    print("a_func()")

a_func()
# outputs: a_func()

a_function_requiring_decoration = a_new_decorator(a_func)
a_function_requiring_decoration()
# outputs:
# before a_func()
# a_func()
# after a_func()
That's exactly what decorators do in Python! They wrap a function and modify its behavior in one way or another. Now you might wonder: we never used the @ sign in that code. @ is just a short way to produce a decorated function. Here is the same thing using @:
@a_new_decorator
def b_func():
    """Hey you! Decorate me!"""
    print("b_func()")

b_func()
# outputs:
# before a_func()
# b_func()
# after a_func()
Hopefully, you now have a basic understanding of how Python decorators work. There is a problem if we run the following code:
print(a_function_requiring_decoration.__name__)
# Output: wrapTheFunction
That's not what we want! The output should be "a_function_requiring_decoration". Our function has been replaced by wrapTheFunction, which overrides its name and its docstring. Fortunately, Python provides a simple fix: functools.wraps. Let's modify the last example to use it:
from functools import wraps

def a_new_decorator(a_func):
    @wraps(a_func)
    def wrapTheFunction():
        print("I am doing some boring work before executing a_func()")
        a_func()
        print("I am doing some boring work after executing a_func()")
    return wrapTheFunction

@a_new_decorator
def a_function_requiring_decoration():
    """Hey yo! Decorate me!"""
    print("I am the function which needs some decoration to "
          "remove my foul smell")

print(a_function_requiring_decoration.__name__)
# Output: a_function_requiring_decoration
It’s better now. Let’s look at some common scenarios for decorators.
A blueprint template:
from functools import wraps

def decorator_name(f):
    @wraps(f)
    def decorated(*args, **kwargs):
        if not can_run:
            return "Function will not run"
        return f(*args, **kwargs)
    return decorated

@decorator_name
def func():
    return "Function is running"

can_run = True
print(func())
# Output: Function is running

can_run = False
print(func())
# Output: Function will not run
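Decorators can also take arguments of their own by adding one more level of nesting — a sketch of my own (the names repeat and greet are illustrative, not from the original):

```python
from functools import wraps

def repeat(times):
    # the outer function receives the decorator's own argument...
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            # ...and the innermost wrapper calls f that many times
            return [f(*args, **kwargs) for _ in range(times)]
        return wrapper
    return decorator

@repeat(3)
def greet(name):
    return 'hi {0}'.format(name)

print(greet('bob'))      # ['hi bob', 'hi bob', 'hi bob']
print(greet.__name__)    # greet  -- preserved by @wraps
```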
12 Duck Types
“A bird can be called a duck when it is seen to walk like a duck, swim like a duck and quack like a duck.”
We don’t care what type of object it is, whether it’s a duck or not, we just care about the behavior.
In Python, for example, there are many file-like objects: StringIO, GzipFile, socket. They share many of the same methods, and we use them all as files.

Likewise, list.extend() doesn't care whether its argument is a list — only that it is iterable — so it accepts a list/tuple/dict/string/generator, etc.

Duck typing is common in dynamic languages, and it is flexible enough that Python doesn't need to lean on a pile of design patterns the way Java does.
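A minimal sketch of the idea (the classes Duck/Person and the function make_it_quack are my own illustration): the caller never checks the type, only the behavior.

```python
class Duck:
    def quack(self):
        return 'Quack!'

class Person:
    def quack(self):
        return 'I am imitating a duck'

def make_it_quack(thing):
    # no isinstance check: any object with a quack() method will do
    return thing.quack()

print(make_it_quack(Duck()))
print(make_it_quack(Person()))
```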
13 Overloading in Python

Quoted from Zhihu: www.zhihu.com/question/20…
Function overloading is primarily intended to solve two problems.
- Variable parameter type.
- Number of variable parameters.
In addition, a basic design principle is to use function overloading only when two functions do exactly the same thing except for the type and number of arguments. If the functions do not do the same thing, overloading should not be used but a function with a different name should be used.
So how does Python handle case 1 — same functionality, different argument types? It simply doesn't need to: Python accepts arguments of any type. If the functionality is the same, different argument types are very likely handled by the same code in Python, so there is no reason to write two functions.

And case 2 — same functionality, different number of arguments? You already know the answer: default arguments. Giving the optional parameters default values solves it; since the functionality is the same, the missing arguments would be needed anyway.
Well, given that cases 1 and 2 have solutions, Python naturally does not need function overloading.
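A tiny sketch of case 2 in practice (the function power is my own example): one function with a default argument covers what C++ would express as two overloads.

```python
def power(base, exponent=2):
    # one definition handles both power(x) and power(x, n)
    return base ** exponent

print(power(3))      # 9  -- the missing argument is filled by the default
print(power(3, 3))   # 27
```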
14 New-style and old-style classes

This article introduces the features of new-style classes well: www.cnblogs.com/btchenguang… New-style classes appeared as early as 2.2, so old-style classes are purely a compatibility issue; Python 3 has only new-style classes. The MRO problem is worth learning about (new-style inheritance follows the C3 algorithm, old-style is depth-first), and much of this is covered in Core Python Programming.

MRO (Method Resolution Order): consider a diamond hierarchy in which B and C both inherit from D, C overrides an attribute defined in D, and A inherits from (B, C). In the classic (old-style) object model, methods and attributes are searched left-to-right, depth-first, so an instance of A looking up that attribute searches A -> B -> D -> C: C's override is skipped and the base-class version in D is found first, which is a bug. New-style classes fix this: the search order becomes A -> B -> C -> D, correctly returning C's attribute. The order is recorded in the special read-only attribute __mro__, a tuple holding the resolution order; it is available on the class only, not on instances. The order also depends on the order of the parent classes in the inheritance parentheses.
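The diamond described above can be checked directly via __mro__ (class and method names here are my own stand-ins):

```python
class D:
    def foo1(self):
        return 'D.foo1'

class B(D):
    pass

class C(D):
    def foo1(self):          # overrides D's version
        return 'C.foo1'

class A(B, C):
    pass

# C3 linearization: A -> B -> C -> D -> object
print([k.__name__ for k in A.__mro__])  # ['A', 'B', 'C', 'D', 'object']
print(A().foo1())  # 'C.foo1' -- C's override is found before D's
```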
15 The difference between __new__ and __init__

__new__ is genuinely rare in practice; a basic understanding is enough.

- __new__ is a static method, while __init__ is an instance method.
- __new__ returns the created instance, while __init__ returns nothing.
- __init__ is only called if __new__ returns an instance of cls.
- __new__ is called to create a new instance; __init__ is called to initialize it.
Ps: __metaclass__ is used when creating a class. So __metaclass__, __new__, and __init__ let us customize, respectively, class creation, instance creation, and instance initialization.
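The call order is easy to observe (the class Point and the calls list are my own illustration):

```python
calls = []

class Point:
    def __new__(cls, *args, **kwargs):
        calls.append('__new__')    # runs first and creates the instance
        return super().__new__(cls)

    def __init__(self, x, y):
        calls.append('__init__')   # runs second and initializes it
        self.x, self.y = x, y

p = Point(1, 2)
print(calls)      # ['__new__', '__init__']
print(p.x, p.y)   # 1 2
```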
16 Singleton mode
Singleton pattern is a common software design pattern. It contains only one special class called a singleton in its core structure. The singleton mode can ensure that a class in the system has only one instance and the instance is easy to access, so as to facilitate the control of the number of instances and save system resources. If you want to have only one object of a class in the system, the singleton pattern is the best solution.
__new__() is called before __init__() to produce the instance object; the singleton implementations below exploit this property. A singleton guarantees one unique object: the class can effectively be instantiated only once.

This is definitely a regular interview question. You should memorize one or two of the versions below, since interviewers often ask you to write one by hand.
1. Using the __new__ method

class Singleton(object):
    def __new__(cls, *args, **kw):
        if not hasattr(cls, '_instance'):
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance

class MyClass(Singleton):
    a = 1
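A quick identity check (restating the class so the snippet runs on its own):

```python
class Singleton(object):
    def __new__(cls, *args, **kw):
        if not hasattr(cls, '_instance'):
            # create the one instance lazily, on first construction
            cls._instance = super(Singleton, cls).__new__(cls)
        return cls._instance

class MyClass(Singleton):
    a = 1

one = MyClass()
two = MyClass()
print(one is two)  # True -- both names refer to the same instance
```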
2. Shared attributes (Borg)

Point the __dict__ of every instance at the same dictionary, so all instances share the same attributes and methods.

class Borg(object):
    _state = {}
    def __new__(cls, *args, **kw):
        ob = super(Borg, cls).__new__(cls)
        ob.__dict__ = cls._state
        return ob

class MyClass2(Borg):
    a = 1
3. Decorator version

def singleton(cls):
    instances = {}
    def getinstance(*args, **kw):
        if cls not in instances:
            instances[cls] = cls(*args, **kw)
        return instances[cls]
    return getinstance

@singleton
class MyClass:
    ...
4. The import method

Python modules are natural singletons:

# mysingleton.py
class My_Singleton(object):
    def foo(self):
        pass

my_singleton = My_Singleton()

# elsewhere
from mysingleton import my_singleton
my_singleton.foo()

A detailed walkthrough of the singleton pattern is available on Jobbole (bole online).
17 Scope in Python
In Python, a variable's scope is always determined by where it is assigned in the code.

When Python encounters a variable name, it searches the scopes in this order: local -> enclosing -> global -> built-in (the LEGB rule).

There are four kinds of scope in Python: L (local), variables defined inside a function; E (enclosing), the local scope of an enclosing (parent) function — not global, and common with closures; G (global), module-level variables; B (built-in), names built into Python, such as int and bytearray.

Note that "local" and "enclosing" are relative to each other.
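The LEGB order can be seen in a few lines (the names outer/inner are my own illustration):

```python
x = 'global'              # G

def outer():
    x = 'enclosing'       # E
    def inner():
        x = 'local'       # L -- found first inside inner
        return x
    return inner(), x

print(outer())            # ('local', 'enclosing')
print(x)                  # 'global' -- module level is untouched
print(len('abc'))         # len is resolved in the built-in scope (B)
```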
18 The global interpreter lock (GIL)

The Global Interpreter Lock is Python's thread-safety restriction on running threads: only one thread executes Python bytecode at any moment, no matter how many processors there are. For IO-bound tasks, Python's multithreading works fine; for CPU-bound tasks, it brings little or no advantage and can even slow things down through contention for resources.

See also: "Python's hardest problem" (on the GIL), the relationship between CPU cores, threads and processes, and Python multi-process programming.

The usual workarounds are multiple processes and the coroutines below (coroutines still run on a single CPU, but they cut switching costs and improve performance).
19 Coroutines

Simply put, coroutines are an upgrade over processes and threads. Both processes and threads face kernel-mode/user-mode switching, which costs a lot of time; with coroutines, the user decides when to switch, and no trap into the kernel is needed.
The most common yield in Python is the idea of coroutines! Look at question number nine.
Python coroutines
20 Closures

Closures are an important construct of functional programming, and also a way of organizing code that improves reusability. When a nested function references a variable from an enclosing scope, we get a closure. To create a closure, all of the following must hold:
- There must be an embedded function
- An embedded function must reference a variable in an external function
- The return value of an external function must be an embedded function
def outer_func():
    loc_list = []
    def inner_func(name):
        loc_list.append(len(loc_list) + 1)
        print('%s loc_list = %s' % (name, loc_list))
    return inner_func

When outer_func finishes, loc_list is not destroyed; it lives on in memory, attached to the returned function. The effect is similar to a class variable, migrated to the function world. A closure is like a hollow ball: you know the outside and the inside, but not what sits in between.
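The "state lives on after the outer function returns" point can be demonstrated with a counter (the names make_counter/counter are my own; nonlocal, available since Python 3, lets the inner function rebind the enclosed variable):

```python
def make_counter():
    count = 0
    def counter():
        nonlocal count      # rebind the enclosing variable, not a new local
        count += 1
        return count
    return counter

c = make_counter()
print(c(), c(), c())   # 1 2 3 -- state survives between calls

c2 = make_counter()    # each call creates an independent enclosed state
print(c2())            # 1
```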
21 lambda functions

(From a Zhihu answer by Tao Wu: www.zhihu.com/question/20…)

Anonymous functions have real uses. Here is a common Python example — square every element of a list:

map(lambda x: x * x, [y for y in range(10)])
Compare with the longhand version:

def sq(x):
    return x * x

map(sq, [y for y in range(10)])
The latter defines an extra function, which is noise if the function is used only once. And the first version is actually easier to read, since it is immediately clear what the function mapped over the list does. If you look closely at your code, you will find this situation common: somewhere you need a function that does one small thing, and its name hardly matters. Lambda expressions fill exactly that niche.
Further, an anonymous function is still a function; what it abstracts is a set of operations. Compare

a = [1, 2, 3]

with

f = lambda x: x + 1

You can see that the thing on the right of the equals sign can exist on its own; the name on the left is just an identifier for the entity on the right. If you are used to [1, 2, 3] standing alone, then lambda x: x + 1 can stand alone too — it means "add one to a number", in and of itself.
22 Python functional programming

Worth understanding in due course, since Python borrows from functional programming too. Recommended reading: Coolshell.

Python's support for functional programming includes:

The filter function works as its name suggests: it calls a boolean function on each element of a sequence and returns the elements for which the function is true.
>>> a = [1, 2, 3, 4, 5, 6, 7]
>>> b = filter(lambda x: x > 5, a)
>>> print(list(b))   # filter returns an iterator in Python 3
[6, 7]
The map function applies a function to each item in a sequence — here, multiplying each item by 2:

>>> a = map(lambda x: x * 2, [1, 2, 3])
>>> list(a)
[2, 4, 6]
The reduce function folds a function over a sequence cumulatively. Here is the factorial of 3 (in Python 3, reduce lives in functools):

>>> from functools import reduce
>>> reduce(lambda x, y: x * y, range(1, 4))
6
23 Copying in Python

The difference between a plain reference, copy.copy() and copy.deepcopy():
import copy
a = [1, 2, 3, 4, ['a', 'b']]  # the original object
b = a                    # assignment: passes a reference to the object
c = copy.copy(a)         # shallow copy
d = copy.deepcopy(a)     # deep copy
a.append(5)              # modify object a
a[4].append('c')         # modify the nested ['a', 'b'] list inside a
print('a =', a)
print('b =', b)
print('c =', c)
print('d =', d)
# output:
# a = [1, 2, 3, 4, ['a', 'b', 'c'], 5]
# b = [1, 2, 3, 4, ['a', 'b', 'c'], 5]
# c = [1, 2, 3, 4, ['a', 'b', 'c']]
# d = [1, 2, 3, 4, ['a', 'b']]
A shallow copy creates a new object, but its elements are still references pointing at the same underlying objects; a deep copy allocates new memory for the nested objects as well, so its references point at fresh copies.
24 Python garbage collection mechanism
Python's GC mainly uses reference counting to track and collect garbage. On top of reference counting, "mark and sweep" solves the circular references that container objects can create, and "generational collection" trades space for time to improve collection efficiency.
1 Reference Count
Every object carries a PyObject header, whose ob_refcnt field is the reference count. When a new reference to an object is created, its ob_refcnt increases; when a reference to it is deleted, its ob_refcnt decreases. When the count reaches 0, the object's life ends and its memory can be reclaimed.
Advantages:
- Simple
- Real-time: an object is reclaimed the moment its count reaches zero
Disadvantages:
- Maintaining reference counts consumes resources
- Cannot reclaim circular references on its own
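A minimal sketch of the circular-reference problem and how the collector handles it (the Node class is illustrative):

```python
import gc

class Node:
    def __init__(self):
        self.other = None

gc.collect()  # start from a clean slate

# Build a reference cycle: a -> b -> a
a, b = Node(), Node()
a.other, b.other = b, a

# Drop the names. Pure reference counting can never free the pair,
# because each object still holds a reference to the other.
del a, b

# The cycle detector (mark-and-sweep layered on refcounting) still finds them.
print(gc.collect())  # number of unreachable objects found; > 0 here
```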
2 Mark-and-sweep mechanism
The basic idea is allocate-on-demand: when free memory runs out, collection starts from the roots (registers and references on the program stack), traverses the graph whose nodes are objects and whose edges are references, marks every reachable object, then sweeps the memory space and frees every unmarked object.
3 Generational collection
The idea of generational collection is to divide all memory blocks into different sets according to how long they have survived; each set is called a "generation". The garbage collection frequency decreases as a generation's survival time grows, where survival time is usually measured in number of garbage collections survived.
Python defines three generations of object collections by default. The larger the generation index, the longer its objects have lived.
For example, when some memory block M has survived three garbage collections, we move it into set A, while newly allocated memory goes into set B. Most of the time garbage collection only scans set B; set A is collected at a much longer interval, so each collection handles less memory and runs more efficiently. Over time, some blocks in set B are promoted to set A because of their long survival. Of course, set A does contain some garbage, and its reclamation is delayed by this generational scheme.
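The generation thresholds and counters can be inspected through the gc module (the exact default values vary across CPython versions):

```python
import gc

# CPython keeps three generations; the thresholds control when each is scanned.
print(gc.get_threshold())  # commonly (700, 10, 10), version-dependent
print(gc.get_count())      # pending allocation counts per generation
```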
26 Python is and ==
is compares identity: whether two names refer to the same object, i.e. the same memory address. == compares values for equality.
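A quick illustration:

```python
a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)          # True: equal values
print(a is b)          # False: two distinct objects in memory
print(a is c)          # True: c is another name for the same object
print(id(a) == id(c))  # True: `is` is equivalent to comparing id()s
```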
27 read, readline and readlines
- read reads the entire file into one string
- readline reads the next line each time it is called
- readlines reads the entire file into a list of lines for us to traverse
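A small sketch using a hypothetical temporary file:

```python
import os
import tempfile

# Create a throwaway three-line file for demonstration
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "w") as f:
    f.write("line1\nline2\nline3\n")

with open(path) as f:
    whole = f.read()        # one string holding the entire file

with open(path) as f:
    first = f.readline()    # just "line1\n"; call again for the next line

with open(path) as f:
    lines = f.readlines()   # ['line1\n', 'line2\n', 'line3\n']

print(repr(whole), repr(first), lines)
```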
28 super().__init__()
super().__init__() executes the constructor of the superclass from inside the subclass's own __init__, so the subclass inherits and can use the attributes the superclass sets up.
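A minimal example (the class names are illustrative):

```python
class Animal:
    def __init__(self, name):
        self.name = name

class Dog(Animal):
    def __init__(self, name, breed):
        super().__init__(name)  # run Animal's constructor to set self.name
        self.breed = breed

d = Dog("Rex", "collie")
print(d.name, d.breed)  # Rex collie
```

Without the super().__init__(name) call, Dog would never set self.name and accessing it would raise AttributeError.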
29 range and xrange
Both are used in loops (this distinction exists in Python 2), and xrange has better memory performance. xrange is used the same way as range, i.e. xrange([start,] stop[, step]) covers the range given by start and stop with the step set by step; the difference is that xrange does not build the whole sequence up front but works like a generator, yielding one value at a time as the loop consumes it.
So xrange generally performs better than range, because it does not need to allocate a large block of memory up front, especially when the data volume is large.
Note: 1. Both xrange and range are basically used in loops. 2. When you need an actual list, you must use range (or wrap the result in list()). In Python 3, xrange is gone and range itself behaves this lazy way.
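In Python 3 the distinction disappears because range itself is lazy, which sys.getsizeof makes visible:

```python
import sys

# range stores only start/stop/step, not a million items
r = range(10**6)
print(sys.getsizeof(r))        # a few dozen bytes, independent of length

# Materializing the sequence pays the full memory cost
print(sys.getsizeof(list(r)))  # several megabytes
```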
30 What is PEP?
PEP stands for Python Enhancement Proposal: a design document that describes a new feature, process, or convention for the Python community. The best-known one, PEP 8, is the style guide specifying how to format Python code for maximum readability.
31 Python memory management mechanism
Memory management in Python is handled by a private heap. All Python objects and data structures reside in this private heap (in CPython). The programmer has no direct access to it; the Python interpreter manages it.
Memory pool
Python's memory mechanism is layered like a pyramid:
- Layers -1 and -2 are operated by the operating system.
- Layer 0 consists of malloc, free and the other C allocation/release functions: the memory interface provided by the OS, whose behavior Python cannot change.
- Layer 1 wraps the layer-0 interface with Python's own: the family of functions prefixed with PyMem_.
- Layer 2 is the memory pool: memory management for common Python objects such as integers and strings. This is where Python's real memory management, including the GC, happens.
- Layer 3 is the object buffer pool mechanism.
On the freeing side, Python calls an object's destructor when its reference count drops to zero, but that does not necessarily end in a call to free(): frequently requesting and releasing memory would hurt efficiency, so the memory pool is used during destruction as well, and memory obtained from the pool is returned to the pool rather than to the OS. PyObject_Malloc serves requests smaller than 256 bytes from the memory pool; for larger requests its behavior degrades to plain malloc. By modifying the Python source, this default threshold, and thus Python's default memory-management behavior, can be changed.
Object buffer pool
A buffer pool is essentially a portion of memory created when the Python interpreter starts, used to store frequently used objects.
Small integer object pool
Integers are used so widely in programs that Python speeds things up with a small-integer object pool, avoiding frequent allocation and destruction of memory for them. Python defines small integers as the range [-5, 256]. These integer objects are created in advance and never garbage collected. Anywhere in a Python program, whatever the LEGB scope, every integer in that range uses the same object.
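This is easy to observe; the 257 case below is built at runtime via int() so the compiler cannot fold the two values into one shared constant:

```python
a = 256
b = 256
print(a is b)  # True: 256 is inside the [-5, 256] pool

c = int("257")  # constructed at runtime, outside the pool
d = int("257")
print(c is d)  # False: two distinct objects
```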
Intern mechanism
Strings made up only of identifier-style characters (letters, digits, underscores; no spaces) are interned, i.e. created only once, because they are likely to be reused. Strings containing spaces are not interned and may be created multiple times. (In older CPython versions there was also a roughly 20-character limit on compile-time folding of string constants, so long concatenated strings could end up as multiple objects.)
a="helloworld"
b="helloworld"
a is b #True
Copy the code
Large integer object pool
Anything beyond the small-integer range is a large integer, and in principle a new object is created each time. But equal large integers within a single code block share the same object. In an interactive terminal each statement is compiled separately, so large integers are recreated each time; when a whole file is run (e.g. in PyCharm), the module is compiled as one unit, so within one code block equal large integers are the same object, acting like a large-integer pool. Below, c1 and d1 live in one code block, while C1.c and C2.b live in different code blocks (each class body is compiled separately), so they are not the same object.
c1 = 1000
d1 = 1000
print(c1 is d1)      # True: same code block

class C1(object):
    a = 100
    b = 100
    c = 1000
    d = 1000

class C2(object):
    a = 100
    b = 1000

print(C1.a is C1.b)  # True: small integers are pooled
print(C1.a is C2.a)  # True: small integers are pooled
print(C1.c is C1.d)  # True: same code block
print(C1.c is C2.b)  # False: different code blocks
Copy the code
Mutable objects
Buffer pools are not used for mutable objects: since a mutable object can change at any time, caching it would be meaningless.
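For example:

```python
a = []
b = []
print(a is b)    # False: every [] creates a brand-new list object

s1 = "helloworld"
s2 = "helloworld"
print(s1 is s2)  # True: the immutable string can be safely shared (interned)
```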
32 __name__ and __main__
__name__ is the name of the current module; when a module is run directly, its __name__ is set to '__main__'. This means code guarded by if __name__ == '__main__': runs when the module is executed directly, but not when the module is imported.
We know that when module B imports something from module A, the top-level code of module A is executed as soon as B's execution reaches the import statement. Module A:
# module A
a = 100
print('Hello, this is module A... ')
print(a)
Copy the code
Module B:
# module B
from package01 import A
b = 200
print('Hello, this is module B... ')
print(b)
Copy the code
When module B is run, the output is as follows:
Hello, this is module A...
100
Hello, this is module B...
200
Copy the code
What if there is some code in module A that we don't want to run when A is imported into B, but do want to run when module A itself is executed directly? The answer is the guard if __name__ == '__main__':
# module A
a = 100
print('Hello, this is module A... ')
if __name__=='__main__':
print(a)
Copy the code
Run module B again, without modifying it. The output is now:
Hello, this is module A...
Hello, this is module B...
200
Copy the code
See, the value of a in module A is no longer printed. So, when you want to import a module but don't want part of it to execute on import, put that part of the code inside "if __name__ == '__main__':".
33. What are the uses of the help() and dir() functions in Python?
Both help() and dir() are available directly in the Python interpreter. The help() function displays docstrings and usage information for modules, keywords, attributes, and so on. The dir() function lists the names (symbols) defined in an object or module.
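A short illustration using the math module:

```python
import math

# dir() lists the names defined in a module (or any object)
names = dir(math)
print("sqrt" in names)  # True

# help(math.sqrt) pretty-prints the same docstring stored in __doc__
print(math.sqrt.__doc__)
```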
34. Do YOU free all memory when exiting Python?
The answer is no. Modules with circular references to other objects, and objects referenced from the global namespace, are not always freed when Python exits. In addition, memory reserved by the C library is not released. On exit, Python does attempt to deallocate and destroy every other object through its cleanup mechanism.