1 00:00:00,000 --> 00:00:12,840 Hi, I'm Zoltan Laszlo Nemeth from the University of Szeged, and in this talk I would like to present a short overview of memory corruption attacks and defenses. 2 00:00:16,040 --> 00:00:27,080 I'm a mathematician working as a lecturer at the University of Szeged who is interested in information security in general and especially in ethical hacking. 3 00:00:27,100 --> 00:00:38,260 I also feel responsible for security in general and especially for security education, and I'm also affiliated with the hackerspace Szeged. 4 00:00:38,260 --> 00:00:47,820 Thus, according to the principle of hackerspaces, while I'm mainly still learning, I have just started to make and share something. 5 00:00:49,100 --> 00:00:56,020 In this talk, I will briefly discuss the problem of low-level programming languages. 6 00:00:56,020 --> 00:01:05,180 Then I will present a general model of memory corruption attacks taken from a recent survey article. 7 00:01:05,300 --> 00:01:16,480 Afterwards, I will talk about some currently deployed protections, some advanced attacks and defense methods with a short demo. 8 00:01:16,480 --> 00:01:24,720 Then I will mention some latest defense proposals, and my talk will end by drawing some conclusions. 9 00:01:28,850 --> 00:01:31,670 Basically, the problem is very simple. 10 00:01:31,670 --> 00:01:34,530 Low-level programming languages are unsafe. 11 00:01:35,570 --> 00:01:49,210 You know, C and its variants, C++, C#, and Objective-C, are used extensively in operating system kernels, system drivers, and embedded systems, just to name a few. 12 00:01:50,550 --> 00:01:56,430 I was told that the C language is the language of the programmer profession. 13 00:01:57,050 --> 00:02:04,970 Moreover, we have millions of lines of legacy C code, but the problem is that C is type-unsafe. 
14 00:02:04,970 --> 00:02:15,570 And what's more, both bounds checking and dynamic memory management are solely the responsibility of the programmer, so C is memory-unsafe. 15 00:02:15,570 --> 00:02:20,450 Hence, it is not surprising that it is error-prone as well. 16 00:02:20,450 --> 00:02:38,270 And indeed, we can enumerate the series of vulnerabilities from the Morris worm to Heartbleed, or just think of the Pwn2Own contest, where even the most hardened versions of operating systems and browsers are exploited every year. 17 00:02:41,270 --> 00:02:44,570 Nevertheless, C is still very popular. 18 00:02:44,830 --> 00:02:50,610 Here you can see the ranking of the most popular programming languages by the IEEE. 19 00:02:51,230 --> 00:02:53,410 Here the icons... 20 00:03:05,830 --> 00:03:26,810 So here the icons mean web applications, mobile applications, languages used for enterprise, desktop, and scientific applications, and for embedded systems, respectively. 21 00:03:29,170 --> 00:03:35,450 It is especially important to fight against memory corruption bugs, but it is difficult. 22 00:03:35,830 --> 00:03:44,570 It is a great challenge both for academia and industry, and it has remained unsolved for more than three decades. 23 00:03:44,750 --> 00:03:47,750 We have seen a huge number of proposals. 24 00:03:47,750 --> 00:03:54,130 We mentioned a few, but only a few mitigation methods are deployed in practice today. 25 00:03:54,290 --> 00:04:05,510 And yet, they are still insufficient to stop real-world attacks. This situation naturally raises the question of why this is so. 26 00:04:07,510 --> 00:04:26,830 I like this figure very much, as it tells us that we should not forget about other aspects, namely usability and functionality in favor of security, but we must make a balance among these sometimes conflicting goals. 
27 00:04:27,870 --> 00:04:46,290 If I try to translate or adapt this to the area of runtime attack mitigation techniques, then security becomes robustness, that is, the type of attacks that can be prevented by the defense, and how effective the method is. 28 00:04:46,290 --> 00:04:50,550 The aspect of ease of use changes to performance. 29 00:04:50,990 --> 00:05:03,630 It is very important, as studies show that no mitigation technique with more than 5-10% runtime overhead can achieve widespread adoption in practice. 30 00:05:04,050 --> 00:05:28,850 And perhaps functionality should be replaced by compatibility, which can be either binary compatibility, meaning that the defended modules still work with unmodified libraries, and if that is not possible, we require source compatibility, meaning that no manual annotation of the source code is necessary. 31 00:05:29,750 --> 00:05:41,790 But besides this, modularity, that is, the possibility to introduce the same defense for different modules independently of each other, is often an important issue. 32 00:05:44,420 --> 00:05:50,200 Now, let me present a model of memory corruption attacks. 33 00:05:51,390 --> 00:05:59,600 This is taken from an excellent survey article by Szekeres, Payer, Wei, and Song. 34 00:05:59,600 --> 00:06:25,820 On the bottom line here, you can see the goal of the attacker, which may be code corruption or control flow hijack, data-only attack, or a memory leak. 35 00:06:26,420 --> 00:06:30,840 And the steps of the attacks are shown above. 36 00:06:32,240 --> 00:06:33,740 Let's see. 37 00:06:36,340 --> 00:06:46,500 So, in this model, in the first phase or first step, the attacker must make a pointer invalid. 38 00:06:47,220 --> 00:06:55,220 A memory corruption bug in the program may cause a pointer either to go out of bounds or to become dangling. 39 00:06:55,720 --> 00:07:02,520 In the first case, we have a spatial error, and in the second case, we have a temporal error. 
40 00:07:03,100 --> 00:07:16,860 In both cases, the next step is to dereference the corrupted pointer, which can be performed by either a write or free or by a read operation. 41 00:07:21,660 --> 00:07:45,420 For instance, a spatial pointer error might be caused by a classic buffer overflow or underflow, by an allocation failure generating a null pointer dereference that might be exploitable in kernel space, by indexing bugs, integer overflows, truncation and signedness bugs, 42 00:07:45,420 --> 00:07:49,080 and by incorrect pointer casting. 43 00:07:50,220 --> 00:08:11,840 Temporal pointer errors are also called use-after-free bugs, because there, a dangling pointer is dereferenced, that is, used after the memory area it points to has been deallocated, that is, freed by the memory management system during a free instruction. 44 00:08:12,380 --> 00:08:26,100 But note that a pointer to a local variable can also become dangling when it is assigned to a global pointer and the subroutine returns, freeing the local variable from the stack. 45 00:08:26,240 --> 00:08:32,440 Moreover, double frees are also typical memory errors. 46 00:08:34,860 --> 00:08:39,740 In the second step, the attacker dereferences the corrupted pointer. 47 00:08:39,740 --> 00:08:45,700 He or she can use it either to write, to free, or to read. 48 00:08:46,080 --> 00:09:08,600 Thus he can, in the first case, overwrite the return address on the stack, or overwrite a function pointer in a vtable, or overwrite the length field of a string object, or by freeing, he can corrupt heap metadata, or overwrite a function pointer of an object on the heap. 49 00:09:08,600 --> 00:09:25,520 But note that even a read through a corrupted pointer can supply malicious data that cause further corruption, or reading a corrupted pointer may lead to execution of malicious code. 50 00:09:29,420 --> 00:09:40,540 Next, the goals of the memory corruption in the first two steps may be quite different. 
51 00:09:40,760 --> 00:09:55,020 The attacker can either modify another data pointer, and repeat phase one and phase two with this newly corrupted pointer. 52 00:09:56,380 --> 00:09:58,560 Oh, sorry. 53 00:10:03,350 --> 00:10:06,130 Or he can modify, 54 00:10:09,700 --> 00:10:22,220 or he can modify code, that is, overwrite it with attacker-specified code called shellcode, and achieve a code corruption attack. 55 00:10:22,900 --> 00:10:43,320 Next, he can corrupt a pointer either to an address of his injected shellcode, or to an address of a code snippet of other memory modules called gadgets, and later, by an indirect call, indirect jump, or return instruction, the attacker can hijack the control flow. 56 00:10:43,900 --> 00:10:46,800 This is the third type of attack. 57 00:10:46,800 --> 00:11:07,740 And please note that it is also possible to just modify a data variable, and if it is a sensitive data variable, then the attacker can execute a data-only attack. 58 00:11:07,740 --> 00:11:20,480 And finally, by corrupting an output variable, like here, for instance, the length field of a string, information can be leaked. 59 00:11:21,300 --> 00:11:23,640 So there are several possibilities. 60 00:11:24,700 --> 00:11:31,760 Next, let us briefly discuss currently deployed runtime protections. 61 00:11:32,380 --> 00:11:42,420 First, we have stack cookies, also called canaries, which are random values between the local buffers and the return addresses on the stack. 62 00:11:42,420 --> 00:11:58,440 They can be used to detect only continuous stack-based buffer overflows, since when the buffer overflows, the canary value is overwritten as well, which can be detected later when the function returns. 63 00:11:59,020 --> 00:12:13,500 In Windows, protections called SafeSEH and SEHOP, structured exception handler overwrite protection, validate the integrity of the exception handler pointers and the exception chain. 
64 00:12:13,500 --> 00:12:21,240 The main problem with these techniques is that these are only partial solutions. 65 00:12:21,240 --> 00:12:40,050 That is, they only provide some very specific type of control flow integrity, and stack cookies can be bypassed by direct overwrites, for instance using indexing errors, or they can be circumvented by memory leaks. 66 00:12:41,660 --> 00:12:49,650 The next widely deployed mitigation technique is the write XOR execute policy. 67 00:12:49,650 --> 00:12:54,070 Please remember that it has two equally important sides. 68 00:12:54,070 --> 00:13:03,210 The first is called non-executable data, or data execution prevention, abbreviated as DEP in the Windows terminology. 69 00:13:04,090 --> 00:13:10,110 And the second might be called non-writable code or code integrity. 70 00:13:10,570 --> 00:13:16,610 Nowadays, these are implemented by memory page protection in modern processors. 71 00:13:17,190 --> 00:13:29,190 Put in simple terms, we must have write or read permission for data pages, but never execute rights. 72 00:13:29,190 --> 00:13:38,930 And similarly, we must have execute or read permission for code pages, but never write at the same time. 73 00:13:39,430 --> 00:13:51,150 Thus, as in many other areas of security, the clear separation of data and code is crucial here as well. 74 00:13:52,650 --> 00:14:18,530 While this policy works well and enjoys hardware support, so it has a negligible overhead, it protects only against code injection attacks and is of no use against the more sophisticated code reuse attacks, like return to libc, return-oriented programming, 75 00:14:18,530 --> 00:14:22,190 or jump-oriented programming. 76 00:14:24,030 --> 00:14:33,090 It also has some other issues, because separation of data and code is not as easy as it first seems. 
77 00:14:34,010 --> 00:14:51,730 We must handle the problem with just-in-time compilation of script languages, where, for instance, for a browser, the scripts are user-supplied input data, but they must be turned into executable code. 78 00:14:51,870 --> 00:14:56,790 Also, there is self-modifying code, like packers. 79 00:14:58,830 --> 00:15:05,930 Thus, this policy forces attackers to apply code reuse instead of code injection. 80 00:15:06,530 --> 00:15:18,810 And today, the main defense against code reuse attacks is address randomization, especially address space layout randomization, abbreviated as ASLR. 81 00:15:19,870 --> 00:15:28,370 It simply randomizes the base address of the stacks, heaps, and the main executables and shared libraries. 82 00:15:28,710 --> 00:15:39,390 It works all right by concealing the address of the possible gadgets from the attacker, but it also has some weaknesses. 83 00:15:40,070 --> 00:16:02,470 First, it requires a position-independent executable, which has about a 10% runtime overhead, and on a 32-bit platform the entropy of the usual ASLR implementation is rather weak, so it is subject to brute-force attacks. 84 00:16:02,470 --> 00:16:09,230 Moreover, partial address overwrites and memory leaks can defeat ASLR as well. 85 00:16:09,610 --> 00:16:24,190 And similar to DEP, ASLR is also an all-or-nothing type defense, meaning that just one non-ASLR module can break the whole protection totally. 86 00:16:25,510 --> 00:16:31,830 Now, let us look at some more advanced defense techniques. 87 00:16:34,570 --> 00:16:42,150 As I mentioned earlier, in the standard ASLR, only the base addresses of the code modules are randomized. 88 00:16:42,290 --> 00:16:52,950 To increase the entropy of the code locations, methods achieving so-called fine-grained randomization were proposed. 89 00:16:53,410 --> 00:17:00,950 The aim here is that a single leaked pointer should not defeat the whole defense. 
90 00:17:01,110 --> 00:17:04,930 The randomization can be applied at the following levels. 91 00:17:05,150 --> 00:17:19,860 In a module, we can permute all functions, or just basic blocks, that is, parts with a single entry and exit point, or we can apply instruction-level randomization. 92 00:17:21,670 --> 00:17:37,950 Of course, this makes ROP attacks harder, but adds little protection over the standard ASLR against return-to-libc attacks or other ROP attacks which require just a single address. 93 00:17:37,950 --> 00:17:46,950 Furthermore, with repeated memory leaks, ROP attacks are still possible, as we shall see. 94 00:17:53,410 --> 00:18:01,230 Here you can see how a new type of attack called just-in-time ROP attack works. 95 00:18:02,550 --> 00:18:21,030 The main assumption here is that, besides a vulnerability that allows ROP attacks, the attacker also has a memory-disclosure vulnerability which allows him to read from an arbitrary memory address in the process's virtual memory. 96 00:18:21,490 --> 00:18:30,510 But as you probably know, trying to read from an unallocated address causes a segmentation fault, and the application usually crashes. 97 00:18:31,750 --> 00:18:46,050 The attacker also needs a single valid runtime memory address, but of course it is not hard to obtain, and the important thing here is that it is also sufficient for a real-world attack. 98 00:18:47,270 --> 00:18:53,330 Here, let this address be the address of function A, 99 00:18:56,530 --> 00:18:59,350 and 100 00:19:02,430 --> 00:19:17,570 using the above-mentioned memory leak, the attacker can read and later disassemble the whole memory page that contains function A, here denoted by page 0. 101 00:19:18,490 --> 00:19:24,550 And the disassembly will certainly reveal other valid memory addresses as well. 102 00:19:24,550 --> 00:19:46,090 For instance, here the instruction call function B reveals the address of function B, of course, and another code page containing it. 
103 00:19:46,610 --> 00:19:51,970 It is contained in code page 1 here. 104 00:19:53,670 --> 00:20:00,340 And by repeating this process, the attacker can gain a significant amount of disassembled pages. 105 00:20:00,340 --> 00:20:10,790 So just by going in cycles, he can obtain many disassembled code pages. 106 00:20:12,480 --> 00:20:27,110 And finally, with a runtime gadget finder and the JIT-ROP compiler, he can generate a ROP payload that works no matter how fine-grained the applied ASLR is. 107 00:20:29,650 --> 00:20:53,430 Moreover, if we have a user-scripting environment, as in a browser or during a document-based attack, then the whole process can be automated, so a memory leak together with ROP and scripting can defeat even the strictest DEP plus ASLR implementation. 108 00:21:01,170 --> 00:21:08,910 ASLR being insufficient, the control flow integrity approach has gained considerable interest recently. 109 00:21:09,330 --> 00:21:20,210 The main idea here is to prevent control flow hijack attacks by restricting the control transfer to a regular benign control flow of the application. 110 00:21:21,410 --> 00:21:35,970 Technically, it can be done first by computing the so-called control flow graph of the application, and then monitoring the executable's runtime behavior according to this graph. 111 00:21:36,390 --> 00:21:48,210 And any deviation from the standard control flow, like a return to an unusual address, indicates a control flow hijack, so the attack can be stopped. 112 00:21:48,790 --> 00:21:52,970 But this method has drawbacks as well. 113 00:21:52,970 --> 00:22:15,290 The main problem is that precise validation of all indirect control flow transfers, namely all indirect calls and all indirect jumps and all returns, moreover only allowing returns to the original callers, introduces a rather high overhead, about 21%. 114 00:22:15,290 --> 00:22:22,370 Therefore, in practice, usually some relaxed versions are used instead. 
115 00:22:22,370 --> 00:22:36,090 So-called coarse-grained control flow integrity policies are implemented, which, as we'll see soon, can be defeated by advanced ROP attacks. 116 00:22:37,890 --> 00:22:48,850 So, a CFI, control flow integrity policy, depends on the type of indirect branches that are checked. 117 00:22:48,850 --> 00:22:54,070 They usually also use some behavior-based heuristics. 118 00:22:54,170 --> 00:23:05,250 For example, frequent use of returns to small code snippets may indicate a ROP attack, and the time of check can be varied as well. 119 00:23:06,490 --> 00:23:15,370 In paper [4], the authors clearly demonstrated the weak points of the coarse-grained control flow integrity approach. 120 00:23:15,370 --> 00:23:31,810 They took five representatives implementing this defense, namely CFI for COTS binaries, kBouncer, ROPecker, ROPGuard, and Microsoft EMET. 121 00:23:31,810 --> 00:23:56,770 And what they did was derive a combined, most restrictive CFI policy from all these programs, and they showed that even this combined policy can be bypassed using two new types of gadgets, which they called a call-ret-pair gadget and a long-NOP gadget. 122 00:24:00,430 --> 00:24:06,770 Now, as a concrete example, let us look at Microsoft EMET. 123 00:24:06,850 --> 00:24:11,150 EMET stands for Enhanced Mitigation Experience Toolkit. 124 00:24:11,150 --> 00:24:16,170 It is a free mitigation tool that can be downloaded from the web. 125 00:24:16,370 --> 00:24:24,030 The main goal of the EMET project is to bring modern protections to earlier versions of Windows. 126 00:24:24,050 --> 00:24:38,330 It is highly modular, and it has no less than 14 different mitigation options that can be turned on and off on a per-application basis, solving many incompatibility problems. 127 00:24:39,150 --> 00:24:45,710 But like most practical tools today, it is not bulletproof. 128 00:24:45,710 --> 00:24:50,290 Even Microsoft warned us that it can be bypassed. 
129 00:24:50,330 --> 00:24:54,910 The first main problem is that it is a user space protection. 130 00:24:54,910 --> 00:25:11,410 The attackers of Offensive Security have been able to turn off the defense of the application completely several times, or the mitigations can be circumvented one by one. 131 00:25:11,410 --> 00:25:14,870 Now let me show you a short demo. 132 00:25:32,370 --> 00:25:47,930 So, for this demo, I took the TestDisk utility, which is a very nice tool from CGSecurity. 133 00:25:47,930 --> 00:26:01,110 It's basically a small utility that can be used to recover lost partitions and repair partition tables. 134 00:26:01,990 --> 00:26:10,690 And it saved my life at least once when I installed Windows over a Linux system. 135 00:26:11,010 --> 00:26:16,490 But the main problem is that this utility also accepts 136 00:26:20,110 --> 00:26:22,350 an image file as input. 137 00:26:22,350 --> 00:26:32,970 And Denis Andzakovic last May found a buffer overflow, a simple buffer overflow vulnerability, in this application. 138 00:26:33,230 --> 00:26:55,630 And what I did was that I exploited this vulnerability, which was easy, and I used it to turn off the mitigation techniques of EMET one by one, and I was able to bypass it finally. 139 00:26:56,030 --> 00:27:09,450 So, just to show you, an attacker can use a small script like this. 140 00:27:09,450 --> 00:27:12,890 Sorry, I do not have time to go into the details. 141 00:27:12,890 --> 00:27:20,010 I am willing to show you the whole code and how the exploit works after the talk. 142 00:27:20,410 --> 00:27:35,430 But the script is used just to generate, yes, a file called GameOverExploitBin. 143 00:27:35,650 --> 00:27:45,830 And if I use it as input, then, okay, just one moment. 144 00:27:45,850 --> 00:27:50,030 I would like to show you that EMET is in effect. 
145 00:27:59,100 --> 00:28:21,190 So this is the graphical user interface of the tool; you can see that all mitigations are green, turned on, and here we should see, yes, here. 146 00:28:21,190 --> 00:28:25,230 This is the TestDisk utility here. 147 00:28:25,230 --> 00:28:38,610 And as I mentioned, it is very versatile, so I can turn all mitigation methods on and off on a per-application basis. 148 00:28:42,050 --> 00:28:57,470 But if I press enter, then it looks like nothing happened, because the calculator, just believe me, appeared on my primary desktop. 149 00:28:57,470 --> 00:29:02,910 But I promise to show you that it works after the talk. 150 00:29:02,910 --> 00:29:12,550 So this is just to demonstrate that even such a runtime mitigation tool can be bypassed. 151 00:29:28,880 --> 00:29:33,620 The problem is, I cannot return to my presentation, 152 00:29:37,680 --> 00:29:40,460 and I do not know why. 153 00:29:42,500 --> 00:29:45,300 Sorry, I must restart it. 154 00:29:57,740 --> 00:30:00,860 Yes, so here we are. 155 00:30:04,840 --> 00:30:12,940 The latest CFI protection solution of Microsoft is called Control Flow Guard. 156 00:30:13,340 --> 00:30:22,300 It was introduced in the preview version of Windows 8.1, but it was disabled in the final edition. 157 00:30:22,420 --> 00:30:29,980 Later it was re-enabled in update 3, and now it can be found in Windows 10. 158 00:30:29,980 --> 00:30:48,780 But it is also a partial and therefore imperfect implementation of control flow integrity: namely, because of performance reasons, it only injects checks before indirect calls, hence returns are left unprotected. 159 00:30:48,780 --> 00:31:04,180 Hence it is effective against vtable overwrites, but it requires compiler and linker support, and third-party modules and even old versions of MS binaries remain unprotected. 160 00:31:04,400 --> 00:31:12,360 And note that even a universal bypass of CFG was demonstrated recently. 
161 00:31:14,540 --> 00:31:24,780 Now let me mention some of the defense proposals that have arisen recently in academic research. 162 00:31:24,780 --> 00:31:29,520 As far as I know, they are still in prototype phase. 163 00:31:32,420 --> 00:31:46,800 First, the concept of code pointer integrity is an alternative to control flow integrity and code randomization that can guarantee the integrity of all code pointers. 164 00:31:46,800 --> 00:31:51,200 It was formally proven to be correct. 165 00:31:51,200 --> 00:32:02,920 This can be done by separating sensitive control flow data like return addresses and jump targets in a safe, protected region. 166 00:32:02,920 --> 00:32:08,780 You know, separation by isolation is one of the basic principles of security. 167 00:32:08,780 --> 00:32:17,000 It also has a relaxed version called code pointer separation, possessing an even smaller overhead. 168 00:32:18,860 --> 00:32:35,380 The main idea of the tool called Readactor is to implement an execute-but-not-read policy in order to do code pointer hiding against the pointer harvesting phase of a JIT-ROP attack. 169 00:32:35,940 --> 00:32:50,960 But it needs hardware support for this, which is called hardware-accelerated paging, and Readactor requires OS and kernel extensions as well. 170 00:32:52,880 --> 00:33:00,040 Another very interesting proposal called Isomeron follows a totally different strategy. 171 00:33:00,040 --> 00:33:04,660 Instead of preventing, it tolerates memory disclosures. 172 00:33:04,660 --> 00:33:08,860 To achieve this goal, it randomizes control flow transfers. 173 00:33:09,100 --> 00:33:19,820 For this, it needs two clones, called isomers, of all functions in memory. 174 00:33:20,180 --> 00:33:29,320 Of course, this roughly doubles the memory requirements of the code, but you know, space is far less an issue than time overhead. 175 00:33:29,880 --> 00:33:31,700 It works as follows. 
176 00:33:31,700 --> 00:33:44,840 On each call and return instruction, Isomeron randomly determines whether to switch to the other isomer or keep the execution in the current one. 177 00:33:44,840 --> 00:33:51,900 These non-deterministic returns and calls make JIT-ROP attacks impossible. 178 00:33:51,920 --> 00:34:02,400 The authors' experiments show that the tool has an acceptable time overhead, and it can be integrated into a compiler. 179 00:34:03,020 --> 00:34:07,040 And the last tool we will see is called HAFIX. 180 00:34:07,040 --> 00:34:16,140 It is a hardware-assisted control flow integrity solution that is significantly more efficient than existing software solutions. 181 00:34:16,140 --> 00:34:27,000 It uses a shadow stack to enforce the intended control flow more precisely than the coarse-grained CFI tools deployed today. 182 00:34:27,000 --> 00:34:40,340 For instance, it enforces returns to target not just any call-preceded instruction, but only those that are in functions currently being executed. 183 00:34:41,220 --> 00:34:45,940 This can be done by three new processor instructions. 184 00:34:48,420 --> 00:34:57,300 As a conclusion, we may say that the problem of unsafe C code is far from being solved at the moment. 185 00:34:57,520 --> 00:35:08,800 It should be obvious that there is no silver bullet, as securing a large amount of unsafe legacy code is hard and will remain hard. 186 00:35:09,440 --> 00:35:32,360 It also seems that while the academic world looks for perfect solutions and is concerned less about real-life realization of the mitigation proposals, the business sector is rather interested in very fast mitigation against current mainstream exploits and is less concerned about possible bypasses. 187 00:35:32,420 --> 00:35:49,250 Of course, inefficient and incompatible solutions are useless in practice, but even a small trade-off for efficiency can totally destroy the defense, as we have seen in the case of coarse-grained control flow integrity. 
188 00:35:51,250 --> 00:36:04,020 It is also clear that there is a constant warfare between attacks and defenses, since known attacks can be mitigated just as known defenses can be bypassed. 189 00:36:04,040 --> 00:36:18,040 Possible solutions can come at different levels, though we mainly discussed software-based techniques here, as hardware-based solutions usually need considerable time to spread. 190 00:36:18,460 --> 00:36:29,840 But many software approaches have performance overhead as the main bottleneck, and hardware support can bring relief to this problem. 191 00:36:29,960 --> 00:36:39,780 Lastly, even though it seems unrealistic, perhaps one day we will use type-safe languages instead of C. 192 00:36:39,980 --> 00:36:55,430 Anyway, as we have seen, runtime mitigations are far from being perfect, thus other layers of defenses like secure coding, security testing, and sandboxing should not be neglected. 193 00:36:56,940 --> 00:36:59,540 Okay, that's all I wanted to say. 194 00:36:59,540 --> 00:37:00,860 Thank you for your attention.