
The Subtle Art of E-Triage (edited draft)

2024-11-29 17:01
A system must scale as:
• the number of users rises, and/or
• the size of the network rises, and/or
• the load on the system increases, and/or
• the amount of data it manages increases

Scalability is Hard!
• In small-scale settings, conditions are easily controlled
• We don't tend to see failures and recoveries
• Things that can fail include computers and the software on them, network links, routers…
• We are not likely to come under attack

Fundamental Issues of Scale
• Suppose a machine can do x business transactions per second
• If I double the load to 2x, how big and fast a machine should I buy?
• With computers, the answer isn't obvious!
• If the answer is "twice as big," we say the problem scales "linearly"
• Often the answer is "4 times as big" or worse! Such problems scale poorly, perhaps even exponentially!
• Basic insight: "bigger" is often much harder!

Does the Internet "Scale"?
• It works pretty well, most of the time
• But if you look closely, it has outages very frequently
• Butler Lampson won the Turing Award
• (to paraphrase): Computer scientists didn't invent the World Wide Web because they are only interested in building things that work really well. The Web, of course, is notoriously unreliable. But the insight we, as computer scientists, often miss is that the Web doesn't need to work well!
• A "reliable web": an example of an oxymoron?
• The Internet scales but has low reliability

How Do Technologies Scale?
• One of the most critical issues we face!
• The bottom line is that, on the whole:
• Very few technologies scale well
• The ones that do tend to have poor reliability and security properties
• Scale introduces major forms of complexity
• And large systems tend to be unstable, hard to administer, and "fragile" under stress

Web Scaling Issues
• A very serious problem for popular sites
• Most solutions work like this:
• Your site "offloads" data to a web-hosting company (examples: Akamai, Exodus)
• They replicate your pages at many sites worldwide
• Ideally, your customers see better performance
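The linear-versus-superlinear point above can be sketched numerically. This is a minimal illustration, not from the slides; `machines_needed` is a hypothetical helper that assumes cost grows as a power of the load:

```python
def machines_needed(load_factor: float, exponent: float) -> float:
    """Relative machine capacity required when load grows by
    `load_factor`, assuming cost scales as load ** exponent."""
    return load_factor ** exponent

# Linear scaling: doubling the load needs a machine twice as big.
print(machines_needed(2, 1.0))  # 2.0

# Quadratic scaling: doubling the load needs a machine 4x as big,
# the "4 times as big or worse" case the slide warns about.
print(machines_needed(2, 2.0))  # 4.0
```

The exponent is the whole story here: anything above 1.0 means growth gets disproportionately expensive, which is why "bigger" is often much harder.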
Second Approach: Digital Island
• They focus on giving better connections to the Internet backbone, avoiding ISP congestion…

Akamai Approach
• They cut deals with lots of ISPs:
• "Give us room in your machine room; in fact, you should pay us for this!"
• "We'll put our server there, and it will handle so much web traffic that your lines will be less loaded, since nothing will need to go out to the backbone. And this will save you big bucks!"

A Good Idea?
• The Akamai approach focuses on "rarely changing" data
• Example: pictures used on your web pages
• Non-example: the pages themselves, which are often constructed in a customized way
• Pre-Akamai: your web site handles all the traffic for constructing pages, and also for handing out the pictures and static content
• Post-Akamai: you hand out the main pages, but the URLs for pictures point to Akamai web servers

Pre- and Post-Akamai
• Pre-Akamai, the pages fetched by the browser are a mass of URLs
• And these point to things like pictures and ads stored at …
• So to open a page, the user:
• Sends a request
• Fetches an index page
• Then …
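The post-Akamai arrangement boils down to rewriting the URLs of static assets so they point at the CDN's servers while the dynamic page itself stays on the origin. A minimal sketch of that idea, assuming hypothetical hostnames (`www.example.com`, `cdn.example.net`) and a simplified `<img src="...">` pattern:

```python
import re

def rewrite_static_urls(html: str, cdn_host: str) -> str:
    """Rewrite absolute image URLs in `html` so that static assets
    (pictures and the like) are fetched from `cdn_host`, while the
    page itself is still served by the origin site."""
    return re.sub(
        r'src="https?://[^/"]+(/[^"]*\.(?:png|jpg|gif))"',
        f'src="https://{cdn_host}\\1"',
        html,
    )

page = '<img src="http://www.example.com/images/logo.png">'
print(rewrite_static_urls(page, "cdn.example.net"))
# <img src="https://cdn.example.net/images/logo.png">
```

Real CDN integrations are more involved (hashing asset versions into URLs, DNS-based server selection), but the core trick the slides describe is exactly this split: customized pages from your site, bulky static content from the replica network.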