An enjoyed kernel apprentice

March 13, 2014

A six-class OS kernel development course

Filed under: Basic Knowledge,kernel — colyli @ 12:17 pm

Since I joined Taobao (a subsidiary of Alibaba Group) in 2010, I have been helping my employer build a Linux kernel team, to maintain an in-house Linux kernel and continuously optimize system performance. The team grew from 1 person to 10 in the following 2 years; we had some success stories from internal projects, while getting 200+ patches merged into the upstream Linux kernel.

In these 2 years, I found that most programmers had only a vague idea of how to write code that cooperates well with the Linux kernel. And I was not the only person to reach this conclusion. A colleague of mine, Zhitong Wang, a system software engineer from Ali Cloud (another subsidiary of Alibaba Group), asked me whether I was interested in designing and promoting a course on OS kernel development, to help junior developers write better code on Linux servers. We had more than 100K real hardware servers online; if we could help other developers improve the performance of their code by even 1%, it would no doubt be extremely cool.

Very soon, we agreed on the outline of the course. It was a six-class course, each class taking 120 ~ 150 minutes:

 

  • First class: Loading Kernel

This class introduced how a runnable OS kernel is loaded by the boot loader and how the first instruction of the kernel gets executed.

  • Second class: Protected Mode Programming

This class introduced the very basic concepts of x86 protected mode programming, which are fundamental to the remaining four classes.

  • Third class: System Call

This class explained how to design and implement a system call interface, and how privilege level transitions happen.

  • Fourth class: Process Scheduling

We expected people to understand how a very simple scheduler works and how a context switch is made.

  • Fifth class: Physical Memory Management

In this class people could get a basic idea of how the memory size is detected, how memory is managed before the buddy system is initialized, and how the buddy and slab systems work.

  • Sixth class: Virtual Memory Management

Finally there was enough background knowledge to introduce how memory mapping, virtual memory areas and page faults are designed and implemented; there were also a few slides introducing the TLB and huge pages.

 

In the next 6 months, Zhitong and I finished the first version of all the slides. When the Alibaba training department learned we were preparing an OS kernel development training, they helped us arrange time slots in both Beijing and Hangzhou (Alibaba Group office locations). We did the first wave of training in 4 months, with around 30 people attending each class. We received far more positive feedback than we expected. Many colleagues told me they had been too busy to attend all six classes, and asked us to arrange the course again.

This was great encouragement to us. We knew the training material could be better, and that there were still better ways to help the audience understand kernel development. With this motivation, and many helpful suggestions from Zhitong, I spent half a year rewriting the slides for all six classes, to make the material more logical, consistent and approachable.

Thanks to my employer, I could prepare the course material during working hours and finish the second wave of training earlier. In the last two classes, the teaching room was full, and some people even had to stand for hours. Again, many colleagues complained that they were too busy and had to miss some of the classes, and asked me to arrange another wave sometime in the future.

This is not an easy task: I gave the 6 classes in both Beijing and Hangzhou, which, including Q&A, was more than 30 hours. But I have decided to arrange another wave of the course, maybe starting in Oct 2014, to show my gratitude to all the people who helped and encouraged me :-)

Here you may find all the slide files for these six classes; they are written in simplified Chinese.
[There is more than enough documentation in English, but in Chinese, the more the better.]

* Class 1: osdev1-loading_kernel
* Class 2: osdev2-protected_mode_programming
* Class 3: osdev3-system_call
* Class 4: osdev4-process_scheduling
* Class 5: osdev5-physical_memory_management
* Class 6: osdev6-virtual_memory_management
 

August 23, 2013

openSuSE Conference 2013 in Thessaloniki, Greece

Filed under: Great Days — colyli @ 6:18 am


In recent months, I have been working on hard disk I/O latency measurement for our cloud service infrastructure. The initial motivation was to identify almost-broken-but-still-working hard disks and isolate them from online services. In order to avoid modifying core kernel data structures and execution paths, I hacked the device mapper module to measure the I/O latency. The implementation is quite simple: add a timestamp “unsigned long start_time_usec” to struct dm_io, and when all sub-ios of a dm_io complete, calculate the latency and store it in the corresponding data structure.

+++ linux-latency/drivers/md/dm.c
@@ -60,6 +61,7 @@ struct dm_io {
 struct bio *bio;
 unsigned long start_time;
 spinlock_t endio_lock;
+ unsigned long start_time_usec;
 };
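
To make this concrete, here is a rough sketch of how the new field could be used. This is my reconstruction, not the actual patch: dm_latency_record() is a hypothetical helper standing in for whatever per-device data structure the real code updates, and the exact hook points in dm.c may differ between kernel versions.

#include <linux/ktime.h>

/* Sketch only: stamp the dm_io at submission, compute the elapsed
 * microseconds when the last sub-io completes. */
static void start_io_acct_usec(struct dm_io *io)
{
    io->start_time_usec = ktime_to_us(ktime_get());
}

static void end_io_acct_usec(struct dm_io *io)
{
    unsigned long latency_usec;

    latency_usec = ktime_to_us(ktime_get()) - io->start_time_usec;
    dm_latency_record(io->md, latency_usec); /* hypothetical helper */
}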

After running it on around 10 servers from several different cloud services, some interesting data and situations were observed, which may be helpful for identifying the relationship between I/O latency and hard disk health.
It happened that openSUSE Conference 2013 was about to take place in Thessaloniki, Greece, a great opportunity for me to share the interesting data with friends and other developers from the openSUSE community.

 

[photos: the Olympic Museum]
Thessaloniki is a beautiful coastal city, and it was an enjoyable experience to have the openSUSE conference there. The venue was a sports museum (a.k.a. the Olympic Museum), a very nice place for a community conference. When I entered the museum one day early, I saw many volunteers (some I knew from SuSE, and some local community members I didn’t know); they were busy preparing everything from meeting rooms to booths. I joined to help a little for half a day, then went back to the hotel to prepare my talk slides.

 

[photos: conference preparation]

 

This year I did better: the slides were finished 8 hours before my talk; last time in Prague it was 4~5 hours before :-) Many more people showed up than I expected, and a lot of communication happened during and after the talk. Some people also suggested that I present updated data at next year’s openSUSE conference. This project is still at quite an early stage; I will continue to share updated information next time.

[photo: my talk]

This year, I didn’t meet many friends who live in Germany or the Czech Republic, maybe because of the long-distance travel and the hot weather. Fortunately, one hacker I met this year helped me a lot: Oliver Neukum. We talked a lot about the seqlock implementation in the Linux kernel, and he inspired an idea for a conflict-free reader for seqlock when reading the clock source in ktime_get(). The idea is simple: if the sequence number has changed after reading the data, just discard the data and return, instead of retrying. In latency sampling there is no need to measure the I/O latency of every request, and if the sampling is random (a lock conflict can be treated as a kind of randomness), the statistical result is still reliable. Oliver also gave a talk on “speculative execution”, introducing the basic idea of speculative execution and the support for it in glibc and the kernel. This was one of the most interesting talks IMHO :-)
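
The “sample or skip” idea can be sketched in a few lines of C. This is only an illustration of the concept, not kernel code, and it leaves out the memory barriers a real seqlock reader needs:

#include <stdbool.h>

struct sampled_value {
    volatile unsigned int seq;      /* even: stable, odd: writer active */
    unsigned long long data;        /* the value being sampled */
};

/* Try to take one sample. On any sign of a concurrent writer, give up
 * instead of retrying; a lost sample only makes the sampling sparser. */
static bool try_sample(const struct sampled_value *v, unsigned long long *out)
{
    unsigned int start = v->seq;
    unsigned long long snapshot;

    if (start & 1)
        return false;               /* writer in progress, skip */
    snapshot = v->data;
    if (v->seq != start)
        return false;               /* writer interfered, drop the sample */
    *out = snapshot;
    return true;
}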

[photo: Oliver Neukum]
During the conference, a lot of useful communication happened: I talked with Andrew Wafaa about possible ARM cooperation in China, with Max Huang about open source promotion, and with Izabel Valverde about the travel support program. This year there was a session about the openSUSE TSP (Travel Support Program) status. IMHO, all the updates make this program more sustainable, e.g. a more explicit travel policy, and asking sponsored people to help as volunteers in the organization. Indeed, before the TSP mentioned this update, I had already been doing it this way for years :-) Thanks to the openSUSE Travel Support Program for helping me meet community friends every year and giving me the opportunity to share ideas with other hackers and community members.

[photos: volunteers; Ralf Flaxa]
As Ralf Flaxa said, the openSUSE community has its own independent power and grows healthily. openSUSE Conference 2013 was the first one held in a city with no SuSE office. I saw many people from the local community helping with venue preparation, organization and management; only a few were SuSE employees. This is impressive; I really felt the power of the community: people just show up, take their own roles and lead. Next year openSUSE Conference 2014 will be in Dubrovnik, Croatia. I believe the community will organize another great event, and of course I will join and help in my own way.

 

[1] slide of my talk can be found here, http://blog.coly.li/docs/osc13-coly.pdf

[2] live video of my talk, http://bambuser.com/v/3754758 starts from 1:17:30

October 30, 2012

openSUSE Conference 2012 in Prague

Filed under: Great Days — colyli @ 4:59 am

From Oct 20 to 23, I was invited and sponsored by openSUSE to give a talk at the openSUSE Conference (OSC2012). The venue was the Czech Technical University in Prague, Czech Republic, a beautiful university (without walls) in a beautiful city.

It had been 5 years since I last visited Prague (for a SuSE Labs conference), and 3 years since I last attended an openSUSE conference as a speaker, at OSC2009 in Nuremberg. At OSC2009, the topic of my talk was “porting openSUSE to the MIPS platform”, a Google Summer of Code project accomplished by Eryu Guan (who became a Redhat employee after graduating). At that time, almost all active kernel developers from China were hired by multinational companies; few local companies (not including universities and institutes) in China contributed patches to the Linux kernel. In 2009, after Wensong Zhang (the original author of Linux Virtual Server) joined Taobao, this local e-business company became willing to optimize the Linux kernel for its online servers and contribute patches back to the Linux kernel community. IMHO, this was a small but important change in China, and it would be my honor to be involved in it. Therefore in June 2010, I left SuSE Labs and joined Taobao, to help the company build a kernel engineering team.

From the first day the team was built, the team and I have applied many ideas that I learned from SuSE/openSUSE kernel engineering, e.g. how to cooperate with the kernel community, how to organize kernel patches, and how to integrate kernel patches and the kernel tree with the build system. After 2+ years, with great support from Wensong and other senior managers, the Taobao kernel team has grown to 10 people, we have contributed 160+ patches to the upstream Linux kernel, and we have become one of the most active Linux kernel development teams in China. Colleagues from other departments and product lines recognize the value of Linux kernel maintenance and performance optimization, and we open all project information and kernel patches to people outside the company. With the knowledge learned from openSUSE engineering, we laid a solid foundation for Taobao’s kernel development/maintenance procedures.

This time the topic of my talk was “Linux kernel development/maintenance in Taobao — what we learn from openSUSE engineering”, an effort to say “thank you” to the openSUSE community. Thanks to the openSUSE conference organization team, I had the opportunity to introduce what we learned from openSUSE and contributed back to the community over the past 2+ years. The slide file can be downloaded here, if anyone is interested in this talk.

Coming back to the openSUSE conference 2 years later was a happy and sweet experience, especially meeting many old friends with whom I had worked for years. I met people from the YaST team, the server team and SuSE Labs, as well as some who no longer work for SUSE but are still active in the openSUSE community. Thanks to the conference organization team again for giving us this rare and precious chance for face-to-face communication, especially for community members like me who are not located in Europe and have to travel overseas.

The conference venue for the first 2 days was the building of FIT ČVUT (Faculty of Information Technology of the Czech Technical University in Prague). There were many meeting rooms available inside the building, so dozens of talks, seminars and BoFs could happen concurrently. I have to say, in order to accommodate 600+ registered attendees, choosing such a large venue was really a great idea. On Monday the venue moved to another building; though there were fewer meeting rooms, the main room (where my talk was held) was bigger.

 

  

CPU power talk by Thomas Renninger

Cgroup usage by Petr Baudiš

Besides talking with many speakers outside the meeting rooms and chairing a BoF on Linux Cgroups (control groups, especially focused on memory and I/O control), some non-Linux-kernel talks attracted me quite a lot. Though all the slides and video recordings can be found on the internet (thanks to the organization team again ^_^), I would like to share the talk by Thijs de Vries, which impressed me among many excellent talks.

  

 

Thijs de Vries: Gamification – using game elements and tactics in a non-game context

Thijs de Vries is from a game design company (correct me if I am wrong). In this talk he explained many design principles and practices of the company. He mentioned that when they plan to design a game, there are 3 objects to consider, which in turn are project, procedure and product: a project is built for the plan, a procedure is set during the project’s execution, and a product is shipped as the output of the project. I do like this idea for design; it is something new and helpful to me. Then he introduced how to make people have fun, get involved in the game, and pick up the intended knowledge from it. From Thijs’ talk, it seems that designing fun rules and goals is not difficult, but IMHO an educational game with fun rules and social goals is not easy to design, even with very hard and careful effort. From his talk, I strongly felt innovation and design genius (indeed not only for games) from a perspective I had never encountered or imagined before.

Besides the orthodox conference talks, a lot of conversation also happened outside the meeting rooms. Alexander Graf mentioned the effort to enable SUSE Linux on ARM boxes, a very interesting topic for people like me who are looking for low-power hardware. For some workloads at Taobao, powerful x86 CPUs no longer help performance; replacing them with low-power ARM CPUs may save a lot of money on power and cooling. Currently the project seems to be going well, and I hope products can be shipped in the near future. Jiaju Zhang also introduced his proposal for a distributed clustering protocol called Booth. We talked about the idea of Booth last year, and it was good to see the idea become a real project step by step. As a file system developer, I also had some discussions about btrfs and OCFS2 with SuSE Labs people. For btrfs, it was unanimous that the file system was not ready for large-scale deployment yet; people from Fujitsu, Oracle, SuSE, Redhat and other organizations were working hard to improve its quality for production usage. For OCFS2, we talked about file system freeze across the cluster; there has been little effort since the initial work 2 years ago, and a very incipient idea was discussed on how to freeze write I/O on each node of the cluster. It seems OCFS2 is in maintenance status currently; I hope someday I (or someone else) will have the time and interest to work on this interesting and useful feature.
This article is just part of my experience from the openSUSE conference. OSC2012 was well organized, including but not limited to the schedule, venue, video recording, meals, travel, hotel, etc. Here I should thank several people who helped me attend this great conference once again:

  • People behind cfp@opensuse.org, who accepted my proposal
  • People behind travel-support@opensuse.org, who kindly offered the sponsorship for my travel
  • Stella Rouzi, who helped me with my visa application
  • Andreas Jaeger, Lars Muller, and other people who encouraged me to give a talk at OSC2012
  • Alexander Graf and others who reviewed my slides

Finally, if you are interested in more information about openSUSE Conference 2012, these URLs may be informative:

Conference schedule: http://bootstrapping-awesome.org/schedule/
Conference video: http://en.opensuse.org/Archive:Conference_video_2012
Slide of my talk: http://blog.coly.li/docs/osc12-coly-taobao.pdf
Video of my talk: http://blip.tv/openSUSEtv/osc12-kernel-development-maintenance-in-taobao-6415082

 

February 7, 2011

alloc_sem of Ext4 block group

Filed under: File System Magic — colyli @ 11:53 am

Yesterday Amir Goldstein sent me an email about a deadlock issue. I was on Chinese New Year vacation and had no time to check the code (also I knew I could not answer his question with ease). Thanks to Ted, who provided a quite clear answer. I feel Ted’s answer is very informative to me as well, so I copy & paste the conversation from linux-ext4@vger.kernel.org to my blog. The copyright of the quoted text below belongs to its original authors.

On Sun, Feb 06, 2011 at 10:43:58AM +0200, Amir Goldstein wrote:
> When looking at alloc_sem, I realized that it is only needed to avoid
> race with adjacent group buddy initialization.
Actually, alloc_sem is used to protect all of the block group specific
data structures; the buddy bitmap counters, adjusting the buddy bitmap
itself, the largest free order in a block group, etc.  So even in the
case where block_size == page_size, alloc_sem is still needed!
- Ted

November 22, 2010

Three Practical System Workloads of Taobao

Filed under: kernel — colyli @ 9:44 pm

A few days ago, I gave a talk at an academic seminar at ACT of Beihang University (http://act.buaa.edu.cn/). In my talk, I introduced three typical system workloads that we (a group of system software developers inside Taobao) observed on the most heavily used/deployed product lines. The introduction was quite brief; no details were touched on. We don’t mind sharing what we did imperfectly, and we would like to keep an open mind and cooperate with the open source community and industry to improve :-)

If you find anything unclear or misleading, please let me know. Communication makes things better most of the time :-)

[The slide file can be found here]

October 16, 2010

China Linux Storage and File System Workshop 2010

Filed under: File System Magic,Great Days,kernel — colyli @ 12:41 pm

[CLSF 2010, Oct 14~15, Intel Zizhu Campus, Shanghai, China]

Similar to the Linux Storage and Filesystem Summit in North America, the China Linux Storage and File System Workshop is a chance for most of the active upstream I/O-related kernel developers to get together and share their ideas and current status.

We (the CLSF committee) invited around 26 people to China LSF 2010, including community developers who contribute to the Linux I/O subsystem, and engineers who develop storage products/solutions based on Linux. In order to reduce travel costs for all attendees, we decided to co-locate China LSF with CLK (China Linux Kernel Developers Conference) in Shanghai.

This year, Intel OTC (Open Source Technology Center) contributed a lot to the conference organization. They kindly provided a free and comfortable conference room, assigned employees to help with organization and preparation, and two intern students acted as volunteers handling many trivial tasks.

CLSF 2010 was a two-day conference; here are some of the topics I found interesting (IMHO) and would like to share on my blog. I don’t understand every topic very well, so if there is any error/mistake in this text, please let me know. Any errata are welcome :-)

– Writeback, led by Fengguang Wu

– CFQ, Block IO Controller & Write IO Controller, led by Jianfeng Gui, Fengguang Wu

– Btrfs, led by Coly Li

– SSD & Block Layer, led by Shaohua Li

– VFS Scalability, led by Tao Ma

– Kernel Tracing, led by Zefan Li

– Kernel Testing and Benchmarking, led by Alex Shi

Besides the above topics, we also had ‘From Industry’ sessions, where engineers from Baidu, Taobao and EMC shared their experience building their own storage solutions/products based on Linux.

In this blog post, I’d like to share the information I got from CLSF 2010; I hope it is informative ;-)

Writeback

The first session was about writeback, which has been quite a hot topic recently. Fengguang has done quite a lot of work on it, and kindly volunteered to lead this session.

An idea was brought up to limit the dirty page ratio per process. Fengguang made a patch and shared a demo picture with us. When the dirty pages of a process exceed its limit, the kernel writes back the dirty pages of this process smoothly, until the number of dirty pages drops to a pre-configured rate. This idea is helpful for processes holding a large number of dirty pages. Some people were concerned that the patch didn’t help the case where many processes each hold a few dirty pages. Fengguang replied that for a server application, if this happens, the design might be buggy.

People also mentioned that the erase block size of SSDs has grown from KBs to MBs, so writing out a larger number of pages at a time may help overall file system performance. Engineers from Baidu shared their experience:

– Increasing the write-out size from 4MB to 40MB, they achieved a 20% performance improvement.

– Using an extent based file system, they got a more contiguous on-disk layout and less memory consumption for metadata.

Fengguang also shared his idea on how to throttle a process’s page dirtying. The original idea was to throttle by issuing I/O (calling writeback_inode(dirtied * 3/2)); after several rounds of improvement it became wait_for_writeback(dirtied/throttle_bandwidth). By this means, the dirty page bandwidth of a process is also controlled.

During the discussion, Fengguang pointed out that the event of a page becoming dirty is more important than whether a page is dirty. Engineers from Baidu said that in order to avoid a kernel/user space memory copy during file reads/writes while still using the kernel page cache, they use mmap to read/write file pages instead of calling the read/write syscalls. In this case, a writable page in the mmap is initially mapped read-only; when a write happens, a page fault is triggered, and then the kernel knows the page has become dirty.
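
This read-only-then-fault trick can be illustrated with a userspace analogue. This is not Baidu’s code, just a sketch of the same mechanism (“data.img” is a placeholder file assumed to be at least one page long): map the file read-only and let the fault handler tell you when a page is first written, just as the kernel notices dirtying through a read-only PTE.

#include <fcntl.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static long page_size;

static void on_write_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    void *page = (void *)((uintptr_t)si->si_addr & ~(uintptr_t)(page_size - 1));
    /* Here the kernel would mark the page dirty; we just make it
     * writable so the faulting write can continue. */
    if (mprotect(page, page_size, PROT_READ | PROT_WRITE) != 0)
        _exit(1);
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);

    int fd = open("data.img", O_RDWR);      /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    char *buf = mmap(NULL, page_size, PROT_READ, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = on_write_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    buf[0] = 'x';   /* first write faults; after the handler it succeeds */
    printf("page 0 was written\n");

    munmap(buf, page_size);
    close(fd);
    return 0;
}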

It seems many ideas are being worked on to improve writeback performance, including active writeback in the background and some cooperation with the underlying block layer. My current focus is not here; anyway, I believe the people in the room can help out a lot :-)

Btrfs

Recently, many developers in China have started to work on btrfs, e.g. Miao Xie, Zefan Li, Shaohua Li, Zheng Yan, … Therefore we arranged a special two-hour session for btrfs. The main purpose of the btrfs session was to share what we are doing on btrfs.

Most people agreed that btrfs needs a real fsck tool now. Engineers from Fujitsu said they had a plan to invest people in developing a btrfs checking tool. Miao Xie, Zefan Li, Coly Li and other developers suggested considering the pain points of fsck from the beginning:

– memory consuming

A 10TB+ storage medium is now cheap and common. For a large file system built on it, fsck needs a lot of memory to hold metadata (e.g. bitmaps, dir blocks, inode blocks, btree internal blocks …). For online fsck, consuming too much memory during the check has a negative performance impact on the page cache or other applications. For offline fsck this was not a problem, but now that online fsck is coming, we have to face this open question :-)

– fsck speed

A tree-structured file system has (much) more metadata than a table-structured file system (like Ext2/3/4), which may mean more I/O and more time. For a 10TB+, 80% full file system, reducing the checking time will be a key issue, especially for online service workloads. I proposed a solution: allocate metadata on an SSD or another device with faster seeks; then checking the metadata involves no (or little) seek time, which results in a faster file system check.

Weeks before, two intern students, Kunshan Wang and Shaoyan Wang, who worked with me, wrote a very basic patch set (including kernel and user space code) to allocate metadata from a faster-seeking device. The patch set compiles, and the students did a quite basic verification of the metadata allocation; the patch worked. I haven’t reviewed the patch yet; from a quite rough look at the code, much improvement is needed. I posted this draft patch set to the China LSF mailing list, to call for more comments from CLSF attendees. Hopefully next month I will have time to improve the great job done by Kunshan and Shaoyan.

Zefan Li said there is a todo list for btrfs: a long term task is data de-duplication, and a short term task is allocating data from SSD. Herbert Xu pointed out that the underlying storage media impacts file system performance quite a lot; in a benchmark by Ric Wheeler of Redhat, on a Fusion-io high end PCI-E SSD there is almost no performance difference between well known file systems like xfs, ext2/3/4 or btrfs.

People also said that these days the review and merging of btrfs patches is often delayed; it seems the btrfs maintainer is too busy to handle the community patches. There was a reply from the maintainer that the situation would improve and patches would be handled in time, but there has been no obvious improvement so far. I can understand that when a person has more urgent tasks like kernel tree maintenance, he or she has difficulty handling non-trivial patches in time if this is not his or her highest priority job. From CLSF, I see more and more Chinese developers starting to work on btrfs; I hope they can be patient if their patches don’t get handled in time :-)

Engineers from Intel OTC mentioned that there is no btrfs support in popular boot loaders like Grub2. IIRC someone is working on it, and the patches are almost ready. Shaohua asked why not load the Linux kernel with a Linux kernel, as the kboot project does. People pointed out that there still has to be something to load the first Linux kernel; this is a chicken-and-egg problem :-) My point was that it should not be very hard to enable btrfs support in a boot loader; a small Google Summer of Code project could make it happen. I’d like to port and merge the patches (once they are available) into openSUSE, since I maintain the openSUSE grub2 package.

Shaohua Li shared his experience of btrfs development for the MeeGo project; he did some work on fast boot and readahead on btrfs. Shaohua said some performance improvement was observed on btrfs, and the better results were achieved by some hacking, like a big readahead size, a dedicated work queue to handle write requests, and a big writeback size. Fengguang Wu and Tao Ma pointed out that this might be generic hacking, because Ext4 and OCFS2 also do similar hacks for better performance.

Finally, Shaohua Li pointed out there is a huge opportunity to improve the scalability of btrfs, since there are still many global locks and cache misses in the current code.

SSD & Block Layer

This was a quite interesting session led by Shaohua Li. Shaohua started the session with some observed problems between SSDs and the block layer:

– Throughput is high, like network

– Disk controller gap, no MSI-x…

– Big locks, queue lock, scsi host lock, …

Shaohua shared some benchmark results showing that at high IOPS the interrupt load falls on a single CPU; even on a multi-processor system, the interrupts could not be balanced across processors, which was a bottleneck for handling the interrupts generated by SSD I/O. If a system had 4 SSDs, one processor ran at 100% just handling interrupts, and the throughput was around 60%-80%.

A workaround here was polling. Replacing interrupts with blk_iopoll helps the performance numbers, since it reduces the processor overhead of interrupt handling. However, Herbert Xu pointed out that the key issue was that current hardware didn’t support multiple queues for the same interrupt. Different interrupts could be balanced across the processors in the system, but unlike network hardware, the same interrupt could not be spread over multiple queues and could only be handled by a single processor. Hardware multi-queue support would be the silver bullet.

For SSDs like the ones Fusion-io produces, the IOPS can be one million+ on a single SSD device; the parallel load is much higher than on traditional hard disks. Herbert, Zefan and I agreed that some hidden race defects would be observed very soon.

Right now, the block layer is not ready for such highly parallel I/O loads. Herbert Xu pointed out that lock contention might be a big issue to solve. The source of the lock contention is the cache coherence cost of global resources protected by locks. Converting global resources into per-CPU local data might be a direction for solving the lock contention issue. Since Jens and Nick can access Fusion-io devices more conveniently, we believe they can work with other developers to help out a lot.
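
The per-CPU idea is easy to demonstrate outside the kernel. The toy program below (a sketch, not block layer code) increments one counter behind a global lock and one counter per thread padded to its own cache line; the per-thread version avoids both the lock and the cache line bouncing, which is exactly the kind of conversion discussed above.

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4
#define NOPS 1000000

/* Contended version: one counter, one lock, one shared cache line. */
static long global_count;
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

/* Scalable version: one counter per thread, each padded to a cache line,
 * summed only when the total is needed. */
static struct { long count; char pad[56]; } per_thread[NTHREADS];

static void *worker(void *arg)
{
    long id = (long)arg;

    for (int i = 0; i < NOPS; i++) {
        pthread_mutex_lock(&global_lock);
        global_count++;             /* every increment bounces the lock */
        pthread_mutex_unlock(&global_lock);

        per_thread[id].count++;     /* local, no contention */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    long sum = 0;
    for (int i = 0; i < NTHREADS; i++)
        sum += per_thread[i].count;
    printf("global=%ld per-thread sum=%ld\n", global_count, sum);
    return 0;
}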

Kernel Tracing

Zefan Li helped lead an interesting session about kernel tracing. I don’t have any real understanding of any kernel trace infrastructure; for me the only tool is printk(). IMHO printk is the best trace/debug tool for kernel programming. Anyway, debugging is always an attractive topic for curious programmers, and I felt Zefan did his job quite well :-)

Tao Ma, an OCFS2 developer, mentioned that OCFS2 currently uses a printk wrapper as its trace code, which is inflexible and quite obsolete; OCFS2 developers are thinking of switching to a trace infrastructure like ftrace.

Zefan pointed out that replacing the previous printk based trace messages with ftrace should be done carefully; there might be ABI (application binary interface) issues for user space tools. Some user space tools work on kernel messages (one can check the kernel messages with the dmesg command). An Intel engineer mentioned a recent incident where a kernel message change caused the powertop tool to stop working correctly.

For file system tracing the situation might be easier, because most of the trace info is used by file system developers or testers, so whoever adds trace info into the file system code might happily ignore the ABI issue. Anyway, it is just “might”, not “is allowed to”.

Zefan said there was a patch introducing TRACE_EVENT_ABI: if some trace info can form a stable user space ABI, it can be declared with TRACE_EVENT_ABI.

This session also discussed how ftrace works. Now I know the trace info is stored in a ring buffer. If ftrace is enabled but the ring buffer is not, the user still cannot receive trace info. People also said that a user space trace tool would be necessary.

Someone said the perf tool is getting more and more powerful, and the trace function will probably be integrated into perf. The Linux kernel only needs one trace tool; some people in the workshop think it might be perf (I have no opinion, because I use neither).

Finally, Herbert again suggested that people pay attention to scalability issues when adding trace points. Currently the ring buffer is not a per-CPU local area, so adding trace points might introduce performance regressions for existing optimized code.

From Industry

At last year’s BeijingLSF, we invited two engineers from Lenovo. They shared their experience using Linux as the base system for their storage solution. This session got quite positive feedback, and all committee members suggested continuing the ‘From Industry’ sessions this year.

For ChinaLSF 2010, we invited 3 companies to share their ideas with other attendees. Engineers from Baidu, Taobao and EMC led three interesting sessions; people had the chance to learn what kinds of difficulties they encountered, how they solved the problems, and what they achieved from their solutions or workarounds. Here I share some interesting points on my blog.

From Taobao

Engineers from Taobao shared their work based on Linux storage and file systems; the projects were Tair and TFS.

Tair is a distributed cache system used inside Taobao; TFS is a distributed user space file system that stores the pictures of goods on Taobao. For detailed information, please check http://code.taobao.org :-)

From EMC

Engineers from EMC shared their work on file system recovery, especially file system checking. Tao Ma and I also mentioned what we did in fsck.ocfs2 (the OCFS2 file system checking tool). The opinion from EMC was that even if an online file system check is possible, offline fsck is still required, because an offline check can examine and fix a file system from a higher-level scope.

Other points had already been discussed in previous sessions, including memory occupation, time consumption …

From Baidu

This was the first time I met people from Baidu and had the chance to learn what they do with the Linux kernel. Thanks to the Baidu kernel team, we had the opportunity to learn what they have done in the past years.

Guangjun Xie from Baidu started the session by introducing Baidu’s I/O workload: most of the I/O is related to indexing and distributed computing, and read performance is more important than write performance. In order to reduce memory copying during data reads, they use mmap to read data pages from the underlying media through the page cache. However, pages accessed via mmap could not take advantage of the Linux kernel’s page cache replacement algorithm, and Baidu didn’t want to implement a similar page cache in user space. Therefore they used a not-beautiful-but-efficient workaround: they implemented an in-house system call, which touches the pages (returned by mmap) in the kernel’s page LRU. By this means, the data pages can be managed by the kernel’s page cache code. Some people pointed out this was effectively mmap() + read ahead. From Baidu’s benchmark, this effort increased search workload performance on a single node server by 100%.
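
As a very rough idea of what such an in-house system call could look like (this is a hypothetical sketch, not Baidu’s actual code, and cache_touch is an invented name): walk the page cache pages backing a range of an open file and mark them accessed, so the kernel LRU sees the activity that raw mmap reads would otherwise hide from it.

#include <linux/file.h>
#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/swap.h>
#include <linux/syscalls.h>

SYSCALL_DEFINE3(cache_touch, unsigned int, fd, loff_t, pos, size_t, count)
{
    struct file *file = fget(fd);
    struct address_space *mapping;
    pgoff_t index, end;

    if (!file)
        return -EBADF;
    if (!count) {
        fput(file);
        return 0;
    }

    mapping = file->f_mapping;
    end = (pos + count - 1) >> PAGE_SHIFT;

    for (index = pos >> PAGE_SHIFT; index <= end; index++) {
        struct page *page = find_get_page(mapping, index);

        if (!page)
            continue;           /* not cached, nothing to promote */
        mark_page_accessed(page);   /* bump it in the LRU */
        put_page(page);
    }
    fput(file);
    return 0;
}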

Baidu also tried using a bigger block size for the Ext2 file system, to make the data block layout more contiguous; their performance data showed that the bigger block size also resulted in better I/O performance. IMHO, an OCFS2 file system mounted in local mode may achieve similar performance, because the basic allocation unit of OCFS2 is a cluster, and the cluster size can be from 4KB to 1MB.

Baidu also tried compressing/decompressing the data when writing to/reading from disk. Since most of Baidu’s data is text, the compression ratio was satisfyingly high. They even used a PCIE compression card, and the performance result was pretty good.

Guangjun also mentioned that when they used SATA disks, some I/O errors were silent errors; for metadata this is fatal, so at least a metadata checksum is necessary. For data checksums, they do it at the application level.
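
An application-level data checksum of this kind can be as simple as the sketch below, which uses zlib’s crc32() (build with -lz); the block layout, with 4 bytes of CRC stored in front of each 4KB block, is just an example I made up, not Baidu’s format.

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define BLOCK_SIZE 4096

/* Store the CRC of the payload in the first 4 bytes of the block. */
static void seal_block(unsigned char block[BLOCK_SIZE])
{
    uint32_t crc = (uint32_t)crc32(0L, block + 4, BLOCK_SIZE - 4);
    memcpy(block, &crc, sizeof(crc));
}

/* Return 1 when the payload still matches its stored CRC. */
static int verify_block(const unsigned char block[BLOCK_SIZE])
{
    uint32_t stored, crc = (uint32_t)crc32(0L, block + 4, BLOCK_SIZE - 4);
    memcpy(&stored, block, sizeof(stored));
    return crc == stored;
}

int main(void)
{
    unsigned char block[BLOCK_SIZE] = { 0 };

    memcpy(block + 4, "some payload", 12);
    seal_block(block);
    printf("clean block ok: %d\n", verify_block(block));

    block[100] ^= 0x1;      /* simulate a silent bit flip */
    printf("corrupted block ok: %d\n", verify_block(block));
    return 0;
}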

Conclusion

Now we come to the last part of this blog post; let me give my own conclusion about ChinaLSF 2010 :-)

IMHO, the organization and preparation this year were much better than at BeijingLSF 2009. People from Intel Shanghai OTC contributed a lot of time and effort before/during/after the workshop; without their effort, we could not have had such a successful event. A big thank-you should also go to our sponsor EMC China, who not only sponsored the conference expenses but also sent engineers to share their development experience.

Let’s look forward to ChinaLSF 2011 next year :-)

July 25, 2010

Don’t waste your SSD blocks

Filed under: File System Magic — colyli @ 11:20 pm

Recently, one of my colleagues asked me a question. He formatted an ~80GB Ext3 file system on an SSD. After mounting the file system, the df output was:

Filesystem    1K-blocks     Used  Available  Use%  Mounted on
/dev/sdb1      77418272   184216   73301344    1%  /mnt

The fdisk output said:

Device Boot      Start       End      Blocks   Id  System
/dev/sdb1         7834     17625    78654240   83  Linux

From his observation, before formatting there were 78654240 1K blocks available in the partition; after formatting, only 77418272 1K blocks could be used, which means about 1.2GB of space in the partition went unused.

A more serious question: from the df output, used blocks + available blocks = 73485560, but the file system had 77418272 blocks, so 3932712 1K blocks disappeared! This 160GB SSD cost him 430 USD; he complained that around 15 USD had been paid for nothing.

IMHO, this is a quite interesting question, and it has been asked by many people many times. This time, I’d like to spend some time explaining where the blocks go, and how to make better use of every block on the SSD (since it’s quite expensive).

First of all, better storage utilization depends on the I/O pattern in practice. This SSD is used to store large files for random I/O; most of the I/O (99%+) is reads at random file offsets, and writes can almost be ignored. Therefore, the goal is to use every available block to store a few very big files on the Ext3 file system.

If you format an Ext3 file system with only the default command line, like “mkfs.ext3 /dev/sdb1”, mkfs.ext3 will do the following block allocations:

- Allocate reserved blocks for the root user, to prevent non-privileged users from using up all the disk space.

- Allocate metadata: the superblock, backup superblocks, block group descriptors, a block bitmap for each block group, an inode bitmap for each block group, and an inode table for each block group.

- Allocate reserved group descriptor (GDT) blocks for future file system extension.

- Allocate blocks for the journal.

Since the SSD is only for data storage, no operating system is installed on it, write performance is disregarded here, there is no requirement for future file system extension, and only a few files are stored on the file system, some of these block allocations are unnecessary and useless:

- Journal blocks

- Inode blocks

- Reserved group descriptor blocks for file system resize

- Reserved blocks for root user

Let’s run dumpe2fs to see how many blocks are wasted on the above items. I only list the relevant part of the output here:

> dumpe2fs /dev/sdb1

Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          f335ba18-70cc-43f9-bdc8-ed0a8a1a5ad3
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              4923392
Block count:              19663560
Reserved block count:     983178
Free blocks:              19308514
Free inodes:              4923381
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      1019
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512

Filesystem created:       Tue Jul  6 21:42:32 2010
Last mount time:          Tue Jul  6 21:44:42 2010
Last write time:          Tue Jul  6 21:44:42 2010
Mount count:              1
Maximum mount count:      39
Last checked:             Tue Jul  6 21:42:32 2010
Check interval:           15552000 (6 months)
Next check after:         Sun Jan  2 21:42:32 2011
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      3ef6ca72-c800-4c44-8c77-532a21bcad5a
Journal backup:           inode blocks
Journal features:         (none)
Journal size:             128M
Journal length:           32768
Journal sequence:         0x00000001
Journal start:            0

Group 0: (Blocks 0-32767)
Primary superblock at 0, Group descriptors at 1-5
Reserved GDT blocks at 6-1024
Block bitmap at 1025 (+1025), Inode bitmap at 1026 (+1026)
Inode table at 1027-1538 (+1027)
31223 free blocks, 8181 free inodes, 2 directories
Free blocks: 1545-32767
Free inodes: 12-8192

[snip ....]

The file system block size is 4KB, which is different from the 1K block size used in the df and fdisk output. Now let’s look at the line for the reserved blocks:

Reserved block count:     983178

These 983178 4K blocks are reserved for the root user; since the system and user home directories are not on the SSD, we don’t need to reserve these blocks. Reading mkfs.ext3(8), there is a parameter ‘-m’ to set the reserved-blocks-percentage; use ‘-m 0’ to reserve zero blocks for the privileged user.

From the file system features line, we can see that resize_inode is one of the features enabled by default:

Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file

The resize_inode feature reserves quite a lot of blocks for future block group descriptors; these blocks can be found in lines like:

Reserved GDT blocks at 6-1024

When the resize_inode feature is enabled, mkfs.ext3 reserves some blocks after the block group descriptor blocks, called “Reserved GDT blocks”. If the file system is extended in the future (e.g. when it is created on a logical volume), these reserved blocks can be used for new block group descriptors. Here the storage medium is an SSD and no file system extension is planned, so we don’t have to pay money (on an SSD, blocks mean money) for this kind of block. To disable the resize_inode feature, use “-O ^resize_inode” with mkfs.ext3(8).

Then look at these 2 lines about inode blocks:

Inodes per group:         8192
Inode blocks per group:   512

We store no more than 5 files on the whole file system, but here 512 blocks in each block group are allocated for the inode table. There are 601 block groups, which means 512×601 = 307712 blocks (≈ 1.2GB of space) are wasted on inode tables. Use ‘-N 16’ with mkfs.ext3(8) to request only 16 inodes for the file system; though mkfs.ext3(8) allocates at least one inode table block per block group (which holds more than 16 inodes), we now waste only 1 block instead of 512 per block group for the inode table.

Journal size:             128M

If most of the I/O is reads, write performance is ignored, and space usage really matters, the journal can be reduced to the minimum size (1024 file system blocks); for an Ext3 file system with 4KB blocks, that is 4MB: -J size=4M

With the above changes, around 4GB+ of space comes back into use. If you really care about the space usage efficiency of your SSD, how about making the file system with:

mkfs.ext3 -J size=4M -m 0 -O ^resize_inode -N 16  <device>

Then you have a chance to get more data blocks into use on your expensive SSD :-)

June 30, 2010

Taobao joins open source

Filed under: Great Days — colyli @ 10:27 am

Taobao's open source community

Today, Taobao announces its open source community: http://code.taobao.org.

This is a historic day: a leading Chinese internet and e-business company joins the open source world through concrete action.

The first project released on code.taobao.org is TAIR. Tair is a distributed, high performance key/value storage system, which has been used in Taobao’s infrastructure for some time. Taobao is on the way to open sourcing more internal projects. Yes, talk is cheap, show the code!

If you are working on a large scale website with more than 10K server nodes, checking the projects on code.taobao.org may help you avoid reinventing the wheel. Please visit http://code.taobao.org and join the community to contribute. I believe people can make the community better and better. Currently, most of the expected developers are Chinese speakers; that’s why you can find square characters on the website. I believe more changes will come in the future, because the people behind the community like continuous improvement :-)

Of course there are other contributions to the open source community from Taobao that cannot be found on code.taobao.org. For example, I believe patches from Taobao will appear in the Linux kernel changelog very soon :-)

June 27, 2010

Random I/O — Is raw device always faster than file system ?

Filed under: File System Magic — colyli @ 8:53 am

For some implementations of distributed file systems, like TFS [1], developers think that storing data directly on a raw device (e.g. /dev/sdb, /dev/sdc…) might be faster than storing it on a file system.

Their choice is reasonable,

1, Random I/O on a large file cannot get any help from the file system page cache.

2, The <logical offset, physical offset> mapping introduces more I/O on a file system than on a raw disk.

3, Managing metadata on other, more powerful servers removes the need to use a file system on the data nodes.

The penalty for the “higher” performance is management cost; storing data on a raw device introduces difficulties like:

1, It is harder to back up/restore the data.

2, No flexible management can be done without special management tools for the raw device.

3, There is no convenient way to access/manage the data on the raw device.

The above penalties are hard for system administrators to ignore. Furthermore, the story of “higher” performance is not exactly true today:

1, For file systems using block pointers for the <logical offset, physical offset> mapping, a large file takes too many pointer blocks. For example, on Ext3 with 4KB blocks, a 2TB file needs around 520K+ pointer blocks (a 2TB file has 512M 4KB data blocks, and each 4KB indirect block holds 1024 block pointers, so roughly 512K indirect blocks are needed, plus the double and triple indirect blocks). Most of the pointer blocks are cold during random I/O, which results in lower random I/O performance than on a raw device.

2, For file systems using extents for the <logical offset, physical offset> mapping, the number of extent blocks depends on how many fragments a large file has. For example, on Ext4 with a maximum block group size of 128MB, a 2TB file has around 16384 fragments. Mapping these 16K fragments needs 16K extent records, which fit in 50+ extent blocks. It is very easy to keep a hot extent block in memory during random I/O on a large file.

3, If the <logical offset, physical offset> mapping can be kept hot in memory, random I/O performance on a file system might not be worse than on a raw device.

In order to verify my guess, I did some performance testing with the setup below (a sketch of what the test tool does follows the list). I share part of the data here.

Processor: AMD Opteron 6174 (2.2GHz) x 2

Memory: DDR3 1333MHz 4GB x 4

Hard disk: 5400RPM SATA 2TB x 3 [2]

File size: (created by dd) almost 2TB

Random I/O accesses: 100K reads

I/O size: 512 bytes

File systems: Ext3, Ext4 (with and without directio)

Test tool: seekrw [3]
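
For reference, here is a minimal sketch of what a random-read tester of this kind does: open a large file (optionally with O_DIRECT) and read 512-byte blocks at random offsets. This is not the actual seekrw source (see [3]); the file path is a placeholder, and random() only covers about 2^31 block offsets, so the real tool picks offsets more carefully.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/ext4/img";     /* placeholder test file */
    const size_t io_size = 512;
    const long iterations = 100000;

    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size < (off_t)io_size) return 1;

    /* O_DIRECT needs an aligned buffer; 4096 covers typical sector sizes. */
    void *buf;
    if (posix_memalign(&buf, 4096, io_size) != 0) return 1;

    srandom(getpid());
    for (long i = 0; i < iterations; i++) {
        off_t off = (off_t)(random() % (st.st_size / (off_t)io_size)) * io_size;
        if (pread(fd, buf, io_size, off) < 0) { perror("pread"); return 1; }
    }

    free(buf);
    close(fd);
    return 0;
}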

* With page cache

- Command

seekrw -f /mnt/ext3/img -a 100000 -l 512 -r

seekrw -f /mnt/ext4/img -a 100000 -l 512 -r

- Performance result


FS     Device   tps     Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
Ext3   sdc      95.88   767.07       0            46024      0
Ext4   sdd      60.72   485.6        0            29136      0

- Wall clock time

Ext3: real time: 34 minutes 23 seconds 557537 usec

Ext4: real time: 24 minutes 44 seconds 10118 usec

* directio (without pagecache)

- Command

seekrw -f /mnt/ext3/img -a 100000 -l 512 -r -d

seekrw -f /mnt/ext4/img -a 100000 -l 512 -r -d

- Performance result


FS     Device   tps     Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
Ext3   sdc      94.93   415.77       0            12473      0
Ext4   sdd      67.9    67.9         0            2037       0
Raw    sdf      67.27   538.13       0            16144      0

- Wall clock time

Ext3: real time: 33 minutes 26 seconds 947875 usec

Ext4: real time: 24 minutes 25 seconds 545536 usec

sdf: real time: 24 minutes 38 seconds 523379 usec    (raw device)

From the above performance numbers, Ext4 is about 39% faster than Ext3 for random I/O, with or without the page cache; this is expected.

The random I/O results on Ext4 and on the raw device are almost the same. This is also an expected result. For file systems that map <logical offset, physical offset> with extents, it is quite easy to keep most of the mapping records hot in memory. Random I/O on the raw device has *NO* obvious performance advantage over Ext4.

Dear developers, how about considering an extent based file system now :-)

[1] TFS, TaobaoFS. A distributed file system deployed for http://www.taobao.com. It is developed by the core system team of Taobao and will be open source very soon.

[2] The hard disks are connected to the system via a RocketRAID 644 card over eSATA.

[3] The seekrw source code can be downloaded from http://www.mlxos.org/misc/seekrw.c

April 30, 2010

a conversation on DLM lock levels used in OCFS2

Filed under: Basic Knowledge,File System Magic — Tags: — colyli @ 9:56 am

Recently, I had a conversation with Mark Fasheh; the topic was the DLM (Distributed Lock Manager) lock levels used in OCFS2 (Oracle Cluster File System v2). IMHO, the talk is quite useful for someone starting with OCFS2 or DLM, so I post the conversation here, hoping it is informative. Thank you, Mark ;-)

Mark gave a simplified explanation of the NL, PR and EX dlm lock levels used in OCFS2 (a tiny code sketch of the compatibility rules follows his explanation).

There are 3 lock levels Ocfs2 uses when protecting shared resources.

“NL” aka “No Lock” this is used as a placeholder. Either we get it so that we
can convert the lock to something useful, or we already had some higher level
lock and dropped to NL so another node can continue. This lock level does not
block any other nodes from access to the resource.

“PR” aka “Protected Read”. This is used so that multiple nodes might read the
resource at the same time without any mutual exclusion. This level blocks only
those nodes which want to make changes to the resource (EX locks).

“EX” aka “Exclusive”. This is used to keep other nodes from reading or changing
a resource while it is being changed by the current node. This level blocks PR
locks and other EX locks.

When another node wants a level of access to a resource which the current node
is blocking due to it’s lock level, that node “downconverts” the lock to a
compatible level. Sometimes we might have multiple nodes trying to gain
exclusive access to a resource at the same time (say two nodes want to go from
PR -> EX). When that happens, only one node can win and the others are sent
signals to ‘cancel’ their lock request and if need be, ‘downconvert’ to a mode
which is compatible with what’s being requested. In the previous example, that
means one of the nodes would cancel it’s attempt to go from PR->EX and
afterwards it would drop it’s PR to NL since the PR lock blocks the other node
from an EX.
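
The compatibility rules Mark describes can be written down in a few lines; this is just an illustration of the three levels, not the actual OCFS2/DLM API:

#include <stdbool.h>
#include <stdio.h>

enum dlm_level { LK_NL, LK_PR, LK_EX };

/* Return true when a lock held at `held` does not block a request for
 * `wanted` from another node. */
static bool compatible(enum dlm_level held, enum dlm_level wanted)
{
    if (held == LK_NL || wanted == LK_NL)
        return true;            /* NL never blocks anyone */
    if (held == LK_PR && wanted == LK_PR)
        return true;            /* multiple readers may coexist */
    return false;               /* EX conflicts with PR and EX */
}

int main(void)
{
    printf("PR vs PR: %d\n", compatible(LK_PR, LK_PR));    /* 1 */
    printf("PR vs EX: %d\n", compatible(LK_PR, LK_EX));    /* 0 */
    printf("EX vs EX: %d\n", compatible(LK_EX, LK_EX));    /* 0 */
    return 0;
}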

After reading the above text, I talked with Mark on IRC; here is the conversation log, edited to remove unnecessary parts:

coly: it’is an excellent material for DLM lock levels of ocfs2!
mark: specially if that helps folks understand what’s happening in dlmglue.c
* mark knows that code can be…. hard to follow  ;)
mark: another thing you might want to take note of – this whole “cancel convert” business is there because the dlm allows a process to retain it’s current lock level while asking for an escalation
coly: one thing I am not clear is, what’s the functionality of dlmglue.c ? like the name, glue ?
mark: if you think about it – being forced to drop the lock and re-acquire would eliminate the possibility of deadlock, at the expense of performance
mark: think of dlmglue.c as the layer of code which abstracts away the dlm interface for the fs
mark: as part of that abstraction, file system lock management is wholly contained within dlmglue.c
coly: only dlmglue.c acts as a abstract layer ?  and the real job is done by fsdlm or o2dlm ?
mark: yes
mark: dlmglue is never actually creating resources itself – it’s asking the dlm on behalf of the file system
mark: aside from code cleanliness, dlmglue provides a number of features the fs needs that the dlm (rightfully) does not provide
coly: which kind of ?
mark: lock caching for example – you’ll notice that we keep counts on the locks in dlmglue
mark: also, whatever fs specific actions might be needed as part of a lock transition are initiated from dlmglue. an example of that would be checkpointing inode changes before allowing other nodes access, etc
coly: yeah, that’s one more thing confusing me.
coly:  It’s not clear to me yet, for the conception of upconvert and downconvert
coly: when it combined with ast and bast
mark: have you checked out the “dlmbook” pdf? it explains the dlm api (which once you understand, makes dlmglue a lot easier to figure out)
coly: yes, I read it. but because I didn’t know ast and bast before, I don’t have conception on what happens in ast and bast
coly: is it something like the signal handler ?
mark: ast and bast though are just callbacks we pass to the dlm. one (ast) is used to tell fs that a request is complete, the other (bast) is used to tell fs that a lock is blocking progress from another node
coly: when an ast is triggered, what will happen ? the node received the ast can make sure the requested lock level is granted ?
mark: generally yes. the procedure is: dlmglue fires off a request… some time later, the ast callback is run and the status it passes to dlmglue indicates whether the operation succeeded
coly: if a node receives a bast, what will happen ? I mean, are there options (e.g. release its lock, or ignore the bast) ?
mark: release the lock once possible
mark: that’s the only action that doesn’t lockup the cluster  ;)
coly: I see, once a node receives a bast, it should try best to downconvert the coresponded lock to NL.
coly: it’s a little bit clear to me :-)

I quote the log rather than giving my own summary; it can be helpful for getting the basic concepts of OCFS2’s dlm lock levels and what ast and bast do.
