A PoC for variant 2 that, when running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific (now outdated) version of Debian's distro kernel [5] running on the host, can read host kernel memory at a rate of around 1500 bytes/second, with room for optimization. Before the attack can be performed, an initialization step has to be carried out that takes roughly 10 to 30 minutes on a machine with 64GiB of RAM; the needed time should scale roughly linearly with the amount of host RAM.
(If 2MB hugepages are available to the guest, the initialization should be much faster, but that hasn't been tested.) Implicit caching occurs when a memory element is made potentially cacheable, although the element may never have been accessed in the normal von Neumann sequence. Implicit caching occurs on the P6 and more recent processor families due to aggressive prefetching, branch prediction, and TLB miss handling. Implicit caching is an extension of the behavior of existing Intel386, Intel486, and Pentium processor systems, since software running on these processor families also has not been able to deterministically predict the behavior of instruction prefetch. After execution has returned to the non-speculative path because the processor has noticed that untrusted_offset_from_caller is bigger than arr1->length, the cache line containing arr2->data[index2] stays in the L1 cache.
By measuring the time required to load arr2->data[0x200] and arr2->data[0x300], an attacker can then determine whether the value of index2 during speculative execution was 0x200 or 0x300, which discloses whether arr1->data[untrusted_offset_from_caller] & 1 is 0 or 1. To be able to actually use this behavior for an attack, an attacker needs to be able to cause the execution of such a vulnerable code pattern in the targeted context with an out-of-bounds index. For this, the vulnerable code pattern must either be present in existing code, or there must be an interpreter or JIT engine that can be used to generate the vulnerable code pattern. So far, we have not actually identified any existing, exploitable instances of the vulnerable code pattern; the PoC for leaking kernel memory using variant 1 uses the eBPF interpreter or the eBPF JIT engine, which are built into the kernel and accessible to normal users. Additionally, at least on the Intel machine on which this was tested, bouncing modified cache lines between cores is slow, apparently because the MESI protocol is used for cache coherence [8]. Changing the reference counter of an eBPF array on one physical CPU core causes the cache line containing the reference counter to be bounced over to that CPU core, making reads of the reference counter on all other CPU cores slow until the changed reference counter has been written back to memory. Because the length and the reference counter of an eBPF array are stored in the same cache line, this also means that changing the reference counter on one physical CPU core causes reads of the eBPF array's length to be slow on other physical CPU cores (intentional false sharing).
The attack uses two eBPF programs. The first one tail-calls through a page-aligned eBPF function pointer array prog_map at a configurable index.
In simplified terms, this program is used to determine the address of prog_map by guessing the offset from prog_map to a userspace address and tail-calling through prog_map at the guessed offsets. To cause the branch prediction to predict that the offset is below the length of prog_map, tail calls to an in-bounds index are performed in between. To increase the mis-speculation window, the cache line containing the length of prog_map is bounced to another core.
To test whether an offset guess was successful, it can be tested whether the userspace address has been loaded into the cache. This program can then be used to leak memory by repeatedly calling the eBPF program with an out-of-bounds offset into victim_map that specifies the data to leak and an out-of-bounds offset into prog_map that causes prog_map + offset to point to a userspace memory area. Misleading the branch prediction and bouncing the cache lines works the same way as for the first eBPF program, except that now, the cache line holding the length of victim_map must also be bounced to another core.
Variant 2: Branch target injection. Prior research (see the Literature section at the end) has shown that it is possible for code in separate security contexts to influence each other's branch prediction. So far, this has only been used to infer information about where code is located (in other words, to create interference from the victim to the attacker); however, the basic hypothesis of this attack variant is that it can also be used to redirect execution of code in the victim context (in other words, to create interference from the attacker to the victim; the other way around). The basic idea for the attack is to target victim code that contains an indirect branch whose target address is loaded from memory and flush the cache line containing the target address out to main memory.
Then, when the CPU reaches the indirect branch, it won't know the true destination of the jump, and it won't be able to calculate the true destination until it has finished loading the cache line back into the CPU, which takes a few hundred cycles. Therefore, there is a time window of typically over 100 cycles in which the CPU will speculatively execute instructions based on branch prediction.

Haswell branch prediction internals

The generic branch predictor, as documented in prior research, only uses the lower 31 bits of the address of the last byte of the source instruction for its prediction.
If, for example, a branch target buffer (BTB) entry exists for a jump from 0x4141.0004.1000 to 0x4141.0004.5123, the generic predictor will also use it to predict a jump from 0x4242.0004.1000. When the higher bits of the source address differ like this, the higher bits of the predicted destination change together with it—in this case, the predicted destination address will be 0x4242.0004.5123—so apparently this predictor doesn't store the full, absolute destination address.
In other words, if a source address is XORed with both numbers in a row of this table, the branch predictor will not be able to distinguish the resulting address from the original source address when performing a lookup. For example, the branch predictor is able to distinguish source addresses 0x100.0000 and 0x180.0000, and it can also distinguish source addresses 0x100.0000 and 0x180.8000, but it can't distinguish source addresses 0x100.0000 and 0x140.2000 or source addresses 0x100.0000 and 0x180.4000. In the following, this will be referred to as aliased source addresses. The branch history buffer (BHB) is interesting for two reasons.
First, knowledge about its approximate behavior is required in order to be able to accurately cause collisions in the indirect call predictor. But it also permits dumping out the BHB state at any repeatable program state at which the attacker can execute code - for example, when attacking a hypervisor, directly after a hypercall. The dumped BHB state can then be used to fingerprint the hypervisor or, if the attacker has access to the hypervisor binary, to determine the low 20 bits of the hypervisor load address (in the case of KVM: the low 20 bits of the load address of kvm-intel.ko).
Reverse-Engineering Branch Predictor Internals

Based on the assumption that branch predictor state is shared between hyperthreads [10], we wrote a program of which two instances are each pinned to one of the two logical processors running on a specific physical core, where one instance attempts to perform branch injections while the other measures how often branch injections are successful. Both instances were executed with ASLR disabled and had the same code at the same addresses. The injecting process performed indirect calls to a function that accesses a (per-process) test variable; the measuring process performed indirect calls to a function that tests, based on timing, whether the per-process test variable is cached, and then evicts it using CLFLUSH. Both indirect calls were performed through the same callsite. Before each indirect call, the function pointer stored in memory was flushed out to main memory using CLFLUSH to widen the speculation time window. Additionally, because of the reference to 'recent program behavior' in Intel's optimization manual, a bunch of conditional branches that are always taken were inserted in front of the indirect call.
However, a history buffer needs to 'forget' about past branches after a certain number of new branches have been taken in order to be useful for branch prediction. Therefore, when new data is mixed into the history buffer, this can not cause information in bits that are already present in the history buffer to propagate downwards - and given that, upwards combination of information probably wouldn't be very useful either. Given that branch prediction also must be very fast, we concluded that it is likely that the update function of the history buffer left-shifts the old history buffer, then XORs in the new state (see diagram). With 32 static jumps in between, no bit flips seemed to have an influence, so we decreased the number of static jumps until a difference was observable. The result with 28 always-taken jumps in between was that bits 0x1 and 0x2 of the target and bits 0x40 and 0x80 of the source had such an influence; but flipping both 0x1 in the target and 0x40 in the source or 0x2 in the target and 0x80 in the source did not permit disambiguation.
This shows that the per-insertion shift of the history buffer is 2 bits and shows which data is stored in the least significant bits of the history buffer. We then repeated this with decreased amounts of fixed jumps after the bit-flipped jump to determine which information is stored in the remaining bits.

Reading host memory from a KVM guest

Locating the host kernel

To find the right address for the source or one of its aliasing addresses, code that loads data through a specific register is placed at all possible call targets (the leaked low 20 bits of kvm-intel.ko plus the in-module offset of the call target plus a multiple of 2^20) and indirect calls are placed at all possible call sources. Then, alternatingly, hypercalls are performed and indirect calls are performed through the different possible non-aliasing call sources, with randomized history buffer state that prevents the specialized prediction from working.
After this step, there are 2^16 remaining possibilities for the load address of kvm.ko. The PoC assumes that the VM does not have access to hugepages. To discover eviction sets for all L3 cache sets with a specific alignment relative to a 4KiB page boundary, the PoC first allocates 25600 pages of memory. Then, in a loop, it selects random subsets of all remaining unsorted pages such that the expected number of sets for which an eviction set is contained in the subset is 1, reduces each subset down to an eviction set by repeatedly accessing its cache lines and testing whether the cache lines are always cached (in which case they're probably not part of an eviction set), and attempts to use the new eviction set to evict all remaining unsorted cache lines to determine whether they are in the same cache set [12].

Locating the host-virtual address of a guest page

The host kernel maps (nearly?) all physical memory in the physmap area, including memory assigned to KVM guests. However, the location of the physmap is randomized (with a 1GiB alignment), in an area of size 128PiB.
Therefore, directly bruteforcing the host-virtual address of a guest page would take a long time. It is not necessarily impossible; as a ballpark estimate, it should be possible within a day or so, maybe less, assuming 12000 successful injections per second and 30 guest pages that are tested in parallel; but not as impressive as doing it in a few minutes. At this point, it would normally be necessary to locate gadgets in the host kernel code that can be used to actually leak data by reading from an attacker-controlled location, shifting and masking the result appropriately and then using the result of that as offset to an attacker-controlled address for a load. But piecing gadgets together and figuring out which ones work in a speculation context seems annoying.
So instead, we decided to use the eBPF interpreter, which is built into the host kernel - while there is no legitimate way to invoke it from inside a VM, the presence of the code in the host kernel's text section is sufficient to make it usable for the attack, just like with ordinary ROP gadgets. In summary, an attack using this variant of the issue attempts to read kernel memory from userspace without misdirecting the control flow of kernel code. This works by using the code pattern that was used for the previous variants, but in userspace. The underlying idea is that the permission check for accessing an address might not be on the critical path for reading data from memory to a register, where the permission check could have significant performance impact.
Instead, the memory read could make the result of the read available to following instructions immediately and only perform the permission check asynchronously, setting a flag in the reorder buffer that causes an exception to be raised if the permission check fails.

Intel is committed to improving the overall security of computer systems. The methods described here rely on common properties of modern microprocessors. Thus, susceptibility to these methods is not limited to Intel processors, nor does it mean that a processor is working outside its intended functional specification. Intel is working closely with our ecosystem partners, as well as with other silicon vendors whose processors are affected, to design and distribute both software and hardware mitigations for these methods. For more information and links to useful resources, visit: AMD.

AFAIK this is not a remote exploitable bug.
You have to be able to run userspace (unprivileged) code on the machine. On the bad side, it seems that from inside a VM (e.g. in a cloud service) you can read other VMs' memory. Anyway, all the hype mixing all processors together, mixing those 3 variants together, is just wrong. Only Meltdown (the Intel-specific one, variant 3) can be used to accurately read memory. The other variants can only make a fair guess at the memory contents by timing the cache access.
Never mind the future goodwill and trust impact on this one. The Pentium FDIV bug was pretty bad for them, but this makes everything else pale in comparison, and if the performance losses are as big as an average of 20%, then someone will demand quite some compensation for that. The Spectre vulnerability seems to be more of a generic one; while bad, it is not as easy to actually exploit on a generic system where you don't know the current setup. As for mitigating that one, it looks like you may have to make cache lines private to the process, rather than a shared general global space.
That AMD 'is vulnerable' should NOT be believed for a single solitary nanosecond without empirical evidence of exploitation on AMD platforms. I could be wrong, but this REEKS of Intel damage control.
That is to say, the 'AMD is exploitable' line may be an attempt to exploit human psychology in order to mitigate consumer flight from Intel CPUs toward AMD's offerings. Upon reading about the Intel bug and how AMD was not vulnerable (at first), the involuntary thought that immediately entered my own mind was 'my next CPU is gonna be AMD.' So what I'm saying is that in order to curb this rational impulse in consumers (given the severity of the bug), there MAY be a 'push' to vilify AMD's chips to instill the feeling of 'oh well, if they both have serious bugs, I may as well stay with what I'm using on my next purchase'.

A PoC that demonstrates the basic principles behind variant 1 in userspace on the tested Intel Haswell Xeon CPU, the AMD FX CPU, the AMD PRO CPU and an ARM Cortex A57 [2]. This PoC only tests for the ability to read data inside mis-speculated execution within the same process, without crossing any privilege boundaries. A PoC for variant 1 that, when running with normal user privileges under a modern Linux kernel with a distro-standard config, can perform arbitrary reads in a 4GiB range [3] in kernel virtual memory on the Intel Haswell Xeon CPU.
If the kernel's BPF JIT is enabled (non-default configuration), it also works on the AMD PRO CPU. On the Intel Haswell Xeon CPU, kernel virtual memory can be read at a rate of around 2000 bytes per second after around 4 seconds of startup time.

It already exists. From section 1.3, 'Hardware': 'We have empirically verified the vulnerability of several Intel processors to Spectre attacks, including Ivy Bridge, Haswell and Skylake based processors. We have also verified the attack's applicability to AMD Ryzen CPUs. Finally, we have also successfully mounted Spectre attacks on several Samsung and Qualcomm processors (which use an ARM architecture) found in popular mobile phones.' They even use the word empirical.
My involuntary thought, since I already have PCs I built on AMD, is wondering whether this vulnerability or the proposed software OS update patch will even have any effect on my system. The test AMD CPU used in this PoC is a lot newer than the AMD chips I use on my two desktop systems, i.e.
AMD Opteron 185 and AMD Phenom II. My other thought is how the attacker would get administrative permission to even execute the exploit code.
Don't the current permissions in place still apply when it comes to defending against any exploit? Doesn't the permission to read+execute need to happen first? This blog post mentions nothing of the sort. Who knows what these testers were doing to try to legitimately hack a PC. With that in mind, the urgency for concern over this wanes. Several AMD FX and at least one APU are affected.
And it is very easily exploited remotely. I have no statistics except my machines. Every Intel and AMD processor I have is attacked remotely almost every day. Not sure if I get attacked more than the average person (it seems to me I should be a boring target to either criminal or state attackers) or if there is widespread (psychological) denial. Many people who should be able to see and understand this stuff better than me still speak as if this is overblown. It's real, it's far worse than the most paranoid security scenarios ever imagined, and if you aren't under constant attack then you either don't realize you are being attacked, or your turn hasn't come yet. It certainly will; I am nobody and if I'm getting it this bad then I can only imagine that everyone is or will.
Perhaps it becomes harder to see with professional-level training, since professionals 'know' better than anyone that a security disaster this bad 'can't' happen. But, unfortunately, it has happened. Everything is darn near wide open and remotely exploitable. Still waiting for the running, screaming and hair on fire. That is where we should be. Time to disembark from our pleasure cruise on denial.
From reading GPZ, Spectre, Meltdown and Fogh, it appears that the side-execution attacks are possible on all modern CPUs, but have only been successfully exploited on Intel chips, quite possibly because of the designs they use which make them the most performant. AMD and ARM are harder to exploit because they lack the hardware which in part gives Intel its performance edge. Some of the attacks appear exploitable through JavaScript loadable through a webpage in Chrome. It's probably possible through Firefox, Edge, and Safari too.
It could read out your passwords. That's pretty devastating. Some also allow containerised software access to all the other containers on a physical host, including those on cloud infrastructure. Again, pretty devastating.
Google say they have mitigated that threat on their infrastructure (I wonder how?); have others? I note that the research exposing these attacks was in large part funded by the EU! I think the general claim is that even if AMD is not vulnerable to the specific attacks identified so far, their architecture implements similar features that may well be subject to similar exploits in future. So swapping Intel CPUs for AMD ones today would indeed reduce your vulnerability, but switching to an AMD processor next year would not. On the one hand, the Intel-specific attack will have been mitigated in software in the meantime; and on the other, an AMD-specific attack may be discovered later.
The only reason to switch to AMD in the long term would be if they could demonstrate that their architecture design process is superior, such that exploits like this are significantly less likely. However, given the complexity of modern processors, it would be completely unsurprising if a new family of exploits was discovered that nobody knew to design against.

Interesting, but not surprised. For years I have routinely deactivated all prefetching features of the CPU in the BIOS, at least on VM hosts.
I would like to know if such a system is still vulnerable. Why did I do that? I observed severe performance hits at high virtualization ratios. I suspected pipeline thrashing and/or misallocation of fake HT cores. We had VMs getting scheduled on two fake (HT) cores. They performed badly, but also crippled the VM running on the two real cores. Disabling HT resulted in 2x faster response times.
Disabling PREFETCH (IP and data) resulted in another 3x faster response (on a large JIRA Kanban board). Systems that exhibited performance issues where disk was suspected (MS Exchange) recovered and ran smoothly ever after, without touching any outside system. HPE has hints to disable prefetch in their tuning guides for low-latency systems.

I tried replicating something along the lines of PoC 1 (in BASIC of all things), but couldn't get it to read the memory speculatively on an i7-860 from what I could see (it does reliably detect the cache misses when it is in proper code flow).
Coming here I see I might need to learn more about branch prediction, I barely know what I'm doing. But after seeing news of this earlier today and reading the papers I thought it sounds worryingly easy, so the fact I got this far starting from scratch in a little over 3 hours (and if it had speculatively executed the read, I would have detected it) is kind of scary. This is going to be interesting to see how this plays out in a month.
FDIV was bad (I was there), but it was self-inflicted by how Intel handled it. This, however, seems like it has way more legal, political and customer-relations pathways, it impacts way more products, and it's not limited to just Intel. As more and more people get their heads around this type of attack methodology and apply it to more parts of the CPU and to different CPUs, GPUs, etc., more products will fall. So right now, batten down the hatches; we are going for a ride.
I used rdtscp in my test above. I just tried on a C2D, which didn't like that; 'prefixing' rdtsc with cpuid worked. Couldn't confirm speculative accesses on the C2D either (see my post above), but I think I'm just doing it wrong. I had modelled it on my understanding of Meltdown, but my test is more like Spectre because I'm abusing the branch predictor (incompetently).
Compare that to the statement in the Meltdown paper that they didn't get it working on AMD simply because it didn't - I'm not calling them incompetent but if they don't know why it didn't work and think it might work, then that's the last type of good news wanted by AMD (or ARM). It really is a rather simple (and fundamental) issue, there has been a grey cloud hanging over speculative execution since it was conceived (imagine the horrors it could wreak on memory-mapped I/O), but now that engineering curio has become a Problem I can't really see a way out other than locking out all forms of timing from untrusted code, including the ability to run raw code at full speed. That is the inverse of where all these JIT VMs have taken us.
It's sort of like the perfect storm: complexity, speed, determinism. I came up with a solution, but it does not involve retpoline. You must think of the problem from a physics point of view; security is binary, it's either secure or it's not. Let me give an example of 1 and 0: the processor is handling 1 to 0 for each bit in question; whether it is writing or reading comes after the fact. My current solution (7th Jan 18): T = (phase doubler) divided by 1. With this equation we can take a step forward in processing logic by adding double phase to the 1 and 0 calculation of a processor. With phase I have added checkphase, similar to checksum, on a binary level. This can be easily added as a kernel to all processors. This gives us a new variable, T. T equals the phase of the 1 or the 0, but it does so twice, to check and allocate processing by splitting, checking and comparing the digits in a bit twice.

It should be clearly mentioned that a speculative load in user mode does not load the corresponding cache line of kernel data into the L1 cache. This results in 'To be able to actually use this behavior for an attack, an attacker needs to be able to cause the execution of such a vulnerable code pattern in the targeted context with an out-of-bounds index. For this, the vulnerable code pattern must either be present in existing code, or there must be an interpreter or JIT engine that can be used to generate the vulnerable code pattern.'
As described, I don't understand why Variant 1 works on 'Intel Haswell Xeon CPU, eBPF JIT is off (default state)', which is the most important point of this report ('the only one', for me). It is not proven in this report, if I understand correctly. Great article and great job!
However, I do not get how these discovered undocumented features are related to a 'security vulnerability'. There are hundreds or thousands of methods for stealing information from hard drives, screens, communication channels and memory. But all these security issues have one unambiguous source: the evil program was installed and run on your computer without your permission and authorization. It doesn't matter how it harms your system after that, whether it destroys your disk, copies your files or gains unauthorized access to another process's memory or the processor's cache.
The security vulnerability is the fact that this program could be started on your computer without your permission. If that happens, there are hundreds of methods to steal your information or harm your computer, and these methods are way more simple and reliable than trying to read another process's memory. I hope that I'm missing something and you can help me figure out my mistake, because 30+ years of experience in computer programming and software architecture is challenged by 'the storm in a glass of water' caused by the discovery of a 'security vulnerability' in the chip (???). I wouldn't have any problem if the issue had been classified as an 'undocumented feature' or a bug.
It still had to be fixed, and you have still done a great job and deserve all the credit. But it is hard for me to accept the discovered issue under the tag of 'security'.
I hope you may help me to fix a gap in my understanding. On x86, it is possible to shut down speculation following an indirect jump with the UD2 instruction. This is described in the Intel® 64 and IA-32 Architectures Optimization Reference Manual, Assembly/Compiler Coding Rule 14 (M impact, L generality): When indirect branches are present, try to put the most likely target of an indirect branch immediately following the indirect branch. Alternatively, if indirect branches are common but they cannot be predicted by branch prediction hardware, then follow the indirect branch with a UD2 instruction, which will stop the processor from decoding down the fall-through path.
I have posted a number of comments that question how this 'security breach' can steal sensitive information. Only the first two comments were published, but the rest just disappeared (or were blocked for some reason). I repeat my very simple question: suppose I have read access to all the memory of all other processes. How can an intruder utilize this to discover passwords, usernames, etc.?
Ok, he might be able to find some photos, perhaps. But what about the more sensitive and encrypted information? I suggested a very simple test: suppose I put the attacker program on my computer; is it able to find the entire text of a message inside another unknown running process, if I provide some keywords?
Could it be done, if I encrypt the message?

Your suggested test is exactly what it can do. Imagine opening up a debugger on the computer capable of reading anything in RAM.
Passwords, usernames, what is open, the content of a document being worked on, the data being shuttled back and forth between a client and a server, keys in use, the lot. The bigger the system, the more there is to look through. Photos, ironically, probably not (if they are on disk and not in use). Encrypted information, no, not without a key or password, but usually the point of something being in memory is that it is being worked on; in other words, it is decrypted. Not always, but the security model relies on protected memory being unreadable; encryption is usually only used on files on the disk in case they are accidentally disclosed.
That's why this 'flaw' is so bad: it can read information that was never expected to be encrypted. It's hard to think of a worse scenario. Fortunately the patches seem practical, and the players in the discovery have managed it well, probably preventing mass exploitation at a scale we have never seen before.

Could you explain how you would discover sensitive information (passwords, usernames, etc.) inside a memory dump of tens of gigabytes without knowing its exact layout? And what is such a dump worth when it is produced at a speed of 2 KB per second?
I suggested a very simple test, which is much easier than stealing sensitive information, and I suggested it because I doubt it can be done. You mention the debugger; that's a very good point. You can write a debugger-like application that attaches to the process of interest and retrieves the information, if it knows how to do that with this particular process. No need for cache lines and other tricks.
Most computers are vulnerable to this. Yuri, the fact that sensitive data is hidden in a lot of noise (tens of GB of dump) is a classic machine-learning problem. If a team is determined to use the flaw, (1) experts in computer security grab the data dump, then (2) experts in data mining and machine learning separate the data from the 'noise'. The best analogy that comes to mind: suppose we find out that everyone's home has a flaw, where the key to their house is buried somewhere in the garden. Since the flaw affects almost everyone, the probability of any one person getting robbed is extremely low; but if your house is known to contain tons of gold, then it's worth sweeping your entire garden with metal detectors just to find the key.
And entire groups might start deploying an army of searchers with metal detectors just to gather as many keys as possible, 'just in case' (e.g. government agencies, miscellaneous nefarious groups). Keep in mind that the data is not encrypted, so it's just a matter of finding recurring patterns. One extremely simplified example: parse the data dump for all websites, look for recurring bank websites, then grab the data you input immediately afterwards (yes, I'm over-simplifying a lot, but it's just probabilistic algorithms crunching a lot of data). Eventually, one would obtain your online banking password.
Same for PayPal and whatnot. PS: my field isn't computer security but data mining, so to me that's the 'easy part' of the problem, though I'd need an accomplice to get the dump.

What you say about the debugger only applies if you're running everything as Administrator or root (or on a C64). On a secured computer, no ordinary user can attach a debugger to a privileged process; that is locked out, and locking it out is the whole point of brick-wall-style memory protection. It doesn't rely on security through obscurity or on hoping things are hard to find, but on hardware designed to make it absolutely (completely and utterly) impossible for a normal-level user to get past. Computers have had this memory-protection hardware for decades.
I'm not saying extracting the root password would be trivial, but first off, on a server with tens of GB of RAM you can pretty much guarantee it is all being used for something, so reading any of it is almost certain to disclose sensitive data. It might be some random PHP in a file cache, but it is still data that someone trusts is behind the security wall.
The Meltdown dumps were produced at 500 KB/s, which is plenty fast enough to find something juicy, or to dump all of RAM in less than a day. I mentioned the debugger because it's interactive: you sit there perusing memory, watching how it changes when you do something, say attempting to log in over SSH. Even 2 KB/s would be enough for that. The mechanics of the attack also mean you get a massive hardware assist and can effectively search much faster for certain things. If you're logged into some random web host, it tells you exactly which kernel is running with which patches, and one of the papers on this attack discusses which structures might be vulnerable even with mitigations like address space randomisation. Without memory protection there is basically no security at all against locally running code.