Sun Yingsha lost 0-4 to Chen Meng and that's suspicious? Don't just look at the match score.

Some people watch a match and only look at the overall match score.

At the 2023 WTT Macau Championship, Sun Yingsha lost 0-4 to Chen Meng in the women's singles semi-final, and some people felt something was off.

At last week's WTT Xinxiang Championship, also in a women's singles semi-final, Chen Meng lost 0-4 to Sun Yingsha. Did those same people find that abnormal too?

At the WTT Xinxiang Championship, Sun Yingsha beat Chen Meng 4-0, with game scores of 11-5, 11-5, 11-7 and 11-6.

None of the four games even reached a tight finish, so it is fair to say Chen Meng was thoroughly beaten.

At the WTT Macau Championship, Chen Meng beat Sun Yingsha 4-0, with game scores of 16-14, 11-6, 11-8 and 11-9.

In the first game, Sun Yingsha led 10-9; Chen Meng tied it and then saved five game points in a row before narrowly winning 16-14.

Sometimes the game-by-game scores tell you more than the match score does.

Compare the other women's singles semi-final: although Wang Manyu beat Wang Yidi 4-2, that match actually held less suspense than Chen Meng's 4-0 win over Sun Yingsha.

Wang Manyu beat Wang Yidi 4-2; the six game scores were 7-11, 11-9, 11-9, 11-6, 5-11 and 11-4.

Wang Manyu is tall with long arms, and she cannot reset her strokes very quickly, which means her forehand-to-backhand transitions are not especially smooth close to the table. What she fears are opponents who transition quickly and hit with real power. Against fast players, she often has to back away from the table, trading distance for reaction time.

Only a player with both speed and power has a real chance of breaking through Wang Manyu's defense.

Wang Yidi, however, is not a speed-oriented player; her game likewise relies on the quality of her individual shots, much like Wang Manyu's.

But compared with Wang Yidi, Wang Manyu puts more spin on her returns and her individual shots are of higher quality. In terms of fundamentals, Wang Manyu is simply the stronger player.

So whatever the final "4-to-something" score, in a best-of-seven match the chance of an upset was very small. Under normal circumstances, Wang Manyu beating Wang Yidi was only a matter of time.

In the semi-final between Chen Meng and Sun Yingsha, however, things were finer. At 9-9 in the fourth game, had it not been Chen Meng's turn to serve, and had Sun Yingsha turned that game around, the outcome of the rest of the match would have been hard to call.

Before the match, when analyzing Chen Meng versus Sun Yingsha, I said: Chen Meng was in especially good form this time, while Sun Yingsha had just played a string of high-intensity events and her form had dipped, which might add a little suspense to this semi-final.

For players of roughly equal strength, recent competitive form matters a great deal, and how a player performs on the day has a large influence on the result.

Fan Zhendong's 3-2 comeback over Lin Shidong in the men's singles quarter-final at last week's WTT Xinxiang Championship is a typical example. Lin Shidong held match point at 10-8 in the fourth game, but Fan Zhendong leveled the score and went on to take that game 13-11, tying the match. Fan Zhendong then won the deciding game 11-8, and he ultimately went on to win the men's singles title in Xinxiang.

In a match, one or two key points can make a world of difference.

In table tennis, beyond who won and who lost, it is worth looking at the score of each game.

I like to go further and look at each player's serving turns, returns and so on. If you are interested too, then dig into the various serves, techniques and tactics, table tennis equipment and more.

So don't stop at the level of "lost 0-4". The players keep improving; shouldn't long-time fans keep pace with the times as well?

As for certain "extreme fans" who believe their favorite player can only win, never lose, and that any loss is abnormal and fishy: don't bring fan-circle culture into competitive sport. Isn't chasing stars in the entertainment world enough?

The technical secrets behind image-generating AI

Over the past few years, artificial intelligence (AI) has advanced rapidly, and among its newest products are AI image generators: tools that convert input text into images. There are many text-to-image AI tools, but the most prominent are DALL-E 2, Stable Diffusion and Midjourney.

DALL-E 2, developed by OpenAI as a sister project to ChatGPT, generates images from a text description. Its GPT-3 transformer model, trained with more than 10 billion parameters, interprets natural-language input and generates corresponding images.

DALL-E 2 consists mainly of two parts: one converts the user's input into a representation of an image (called the prior), and the other converts this representation into an actual picture (called the decoder).

The text and image embeddings it uses come from another network called CLIP (Contrastive Language-Image Pre-training), also developed by OpenAI. CLIP is a neural network that returns the best caption for an input image. It does the opposite of DALL-E 2: it maps images to text, while DALL-E 2 maps text to images. CLIP is introduced to learn the connection between the visual and textual representations of the same object.
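To make the "connection between visual and textual representations" concrete, here is a minimal sketch of CLIP-style matching in NumPy. The embeddings here are toy random vectors standing in for real CLIP encoder outputs; only the scoring step (cosine similarity between normalized embeddings) reflects how CLIP actually pairs images with captions.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def clip_style_match(image_embs, text_embs, scale=100.0):
    """Toy CLIP-style matching: embeddings are assumed given (a real CLIP
    model would produce them); only the similarity scoring is shown."""
    img = l2_normalize(image_embs)
    txt = l2_normalize(text_embs)
    # One similarity score per (image, caption) pair; higher means a better match.
    logits = scale * img @ txt.T
    # For each image, pick the caption with the highest similarity.
    return logits.argmax(axis=1)

# Three toy 4-d "embeddings"; image i is constructed to align with caption i.
rng = np.random.default_rng(0)
captions = rng.normal(size=(3, 4))
images = captions + 0.05 * rng.normal(size=(3, 4))  # slightly noisy copies
best = clip_style_match(images, captions)
print(best)
```

In real CLIP the two encoders are trained jointly so that matching image/caption pairs score high and mismatched pairs score low; this sketch fakes that outcome by construction.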

DALL-E 2's job is to train two models. The first is the prior, which accepts a text caption and creates a CLIP image embedding. The second is the decoder, which accepts the CLIP image embedding and generates an image. Once the models are trained, inference proceeds as follows:

  • The input text is converted into a CLIP text embedding using a neural network.

  • Principal component analysis is used to reduce the dimensionality of the text embedding.

  • The prior creates an image embedding from the text embedding.

  • In the decoder step, a diffusion model turns the image embedding into an image.

  • The image is upscaled from 64×64 to 256×256, and finally to 1024×1024, using convolutional neural networks.
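The steps above can be sketched as a simple dataflow. Every function below is a toy stand-in (random values, nearest-neighbour upscaling), not a real model; the point is only to show the shapes and order of the stages in DALL-E 2's inference pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def text_encoder(prompt):
    # CLIP text encoder stand-in: prompt -> 512-d text embedding.
    return rng.normal(size=512)

def pca_reduce(text_emb, k=64):
    # Dimensionality-reduction stand-in (the real pipeline applies PCA here).
    return text_emb[:k]

def prior(text_emb):
    # Prior stand-in: text embedding -> 512-d CLIP image embedding.
    return rng.normal(size=512)

def decoder(image_emb):
    # Diffusion-decoder stand-in: image embedding -> 64x64 RGB image.
    return rng.normal(size=(64, 64, 3))

def upsample(img, size):
    # Super-resolution stand-in: nearest-neighbour upscaling by repetition.
    factor = size // img.shape[0]
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

text_emb = pca_reduce(text_encoder("a cat in a spacesuit"))
img = decoder(prior(text_emb))
img = upsample(upsample(img, 256), 1024)
print(img.shape)  # (1024, 1024, 3)
```

In the real system the two upscaling stages are themselves diffusion-based super-resolution models; nearest-neighbour repetition is used here only to keep the sketch self-contained.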

Stable Diffusion is a text-to-image model that uses the CLIP ViT-L/14 text encoder and can be conditioned through text prompts. At run time it frames image generation as a "diffusion" process: starting from pure noise, it gradually refines the image until no noise remains, moving steadily closer to the given text description.

Stable Diffusion is based on the Latent Diffusion Model (LDM), a state-of-the-art text-to-image synthesis technique. Before looking at how an LDM works, let's look at what a diffusion model is and why an LDM is needed.

A diffusion model (DM) is a class of generative model that takes a piece of data (such as an image) and gradually adds noise over time until the data is unrecognizable. The model then learns to walk the image back toward its original form, and in doing so learns how to generate images or other data.
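The "gradually add noise" direction has a simple closed form in the standard DDPM formulation: x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise, where alpha_bar_t is the cumulative product of (1 - beta_t) over the noise schedule. A minimal NumPy sketch of that forward process (the reverse, learned direction is the hard part and is omitted):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Closed-form forward noising q(x_t | x_0) from the DDPM formulation."""
    alpha_bar = np.cumprod(1.0 - betas)  # cumulative fraction of signal retained
    eps = rng.normal(size=x0.shape)      # fresh Gaussian noise
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))                 # a toy 8x8 "image"
betas = np.linspace(1e-4, 0.02, 1000)        # common linear noise schedule

early = forward_diffuse(x0, t=10, betas=betas, rng=rng)    # barely corrupted
late = forward_diffuse(x0, t=999, betas=betas, rng=rng)    # almost pure noise

alpha_bar = np.cumprod(1.0 - betas)
print(alpha_bar[10], alpha_bar[999])  # near 1 early, near 0 at the final step
```

Training then amounts to teaching a network to predict the added noise at a random step t, which is what lets the model run the process in reverse at generation time.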

The problem with DMs is that powerful ones consume large amounts of GPU resources, and inference is expensive because of their sequential evaluations. To train a DM on limited compute without sacrificing quality or flexibility, Stable Diffusion applies the DM in the latent space of a powerful pre-trained autoencoder.

Training the diffusion model on this premise makes it possible to strike a near-optimal balance between reducing complexity and preserving detail, significantly improving visual fidelity. Cross-attention layers added to the model architecture turn the diffusion model into a powerful and flexible conditional generator and enable convolution-based high-resolution image generation.
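Those cross-attention layers are what let the text prompt steer the denoising: queries come from the image latent, while keys and values come from the text encoding, so each spatial position can "look at" the prompt. A single-head sketch in NumPy, with random stand-in projection matrices (a real model learns them during training):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latent_tokens, text_tokens, d_head=16, seed=0):
    """Single-head cross-attention sketch: image latent attends to the prompt.
    Projection matrices are random stand-ins, not trained weights."""
    rng = np.random.default_rng(seed)
    d_lat, d_txt = latent_tokens.shape[1], text_tokens.shape[1]
    Wq = rng.normal(size=(d_lat, d_head))
    Wk = rng.normal(size=(d_txt, d_head))
    Wv = rng.normal(size=(d_txt, d_head))
    Q = latent_tokens @ Wq                        # (n_latent, d_head)
    K = text_tokens @ Wk                          # (n_text, d_head)
    V = text_tokens @ Wv                          # (n_text, d_head)
    attn = softmax(Q @ K.T / np.sqrt(d_head))     # each latent position's weights over prompt tokens
    return attn @ V, attn

rng = np.random.default_rng(1)
latents = rng.normal(size=(64, 32))   # an 8x8 latent grid with 32 channels, flattened
prompt = rng.normal(size=(7, 48))     # 7 text tokens, 48-d encoding
out, attn = cross_attention(latents, prompt)
print(out.shape, attn.shape)  # (64, 16) (64, 7)
```

Because the attention runs over the small latent grid rather than full-resolution pixels, this conditioning stays cheap, which is exactly the efficiency argument behind the LDM design.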

Midjourney is likewise an AI-driven tool that generates images from user prompts. It excels at adapting real artistic styles and combining whatever effects the user asks for. It is especially strong on environments, particularly fantasy and science-fiction scenes that look like video-game concept art.

DALL-E 2 is trained on millions of images, and its output is more polished, making it well suited to business use. When an image contains more than two figures, DALL-E 2's results are far better than those of Midjourney or Stable Diffusion.

Midjourney is famous above all for its artistic style. It uses a Discord bot to send requests to and receive results from the AI servers, so almost everything happens on Discord. The resulting images rarely look like photographs; they feel more like paintings.

Stable Diffusion is an open-source model that anyone can use. It has a good grasp of contemporary artistic imagery and can produce richly detailed artwork, but it needs complex prompts spelled out explicitly. Stable Diffusion is better suited to complex, creative illustrations, though it has some shortcomings when generating simpler, more generic images.