When I left Microsoft in October 2016 after 21 years there, and after 35 years in the industry, I took some time to reflect on what I had learned over all those years. This is a lightly edited version of that reflection. Pardon the length!

To be a skilled programmer you need to know a surprising number of things — language details, APIs, algorithms, data structures, systems and tools. These things change all the time — new languages and programming environments spring up constantly, and there always seems to be some hot new tool or language that “everyone” is using. It is important to keep up and stay proficient. A carpenter needs to know how to pick the right hammer and nails for the job and how to drive those nails straight and true.

At the same time, I have found that there are some concepts and strategies that apply across a wide range of contexts and across decades. We have seen many orders-of-magnitude changes in the performance and capability of our underlying devices, yet certain approaches to how we design systems remain relevant. These matter more than any particular implementation. Understanding these recurring themes is hugely helpful in analyzing and designing the complex systems we build.

Humility and Ego

This goes beyond programming, but in a field that exhibits so much constant change, a person needs a healthy balance of humility and ego. There is always more to learn, and there is always someone who can help you learn it — if you are willing and open to that learning. One needs both the humility to recognize and admit what you do not know, and the ego that gives you the confidence to master a new area and apply what you already know. The biggest challenge I have seen is when someone has worked in one deep area for a long time and “forgets” just how good they are at learning new things. The best learning comes from getting your hands dirty and building something, even if it is only a prototype or a hack. The best programmers I know combine a broad knowledge of technology with the willingness to go deep into particular technologies and become experts. The deepest learning happens when you are wrestling with truly hard problems.

The End-to-End Argument

Back in 1981, Jerry Saltzer, Dave Reed and Dave Clark were doing early work on the Internet and distributed systems and wrote up their classic description of the end-to-end argument. There is a lot of misinformation floating around the Internet, so it is worth going back and reading the original paper. They were humble in not claiming invention — from their perspective this was a common engineering strategy that applies in many areas, not just communications. They simply wrote it down and collected examples. A minor paraphrase is:

When implementing some function in a system, that function can only be implemented correctly and completely with the knowledge and participation of the endpoints of the system. In some cases, a partial implementation in some internal component of the system may be important for performance reasons.

The original paper calls this an “argument,” although on Wikipedia and in some other places it has been promoted to a “principle.” Argument really is the better term — as they lay out in detail, one of the hardest problems for a system designer is deciding how to divide responsibilities between the components of a system. That ends up being a discussion of trade-offs as you partition functionality, isolate complexity and try to design a reliable, high-performance system that stays flexible in the face of changing requirements.

Much of the discussion about the Internet focuses on communication systems, but the end-to-end argument applies in far broader contexts. One example from distributed systems is the idea of “eventual consistency.” An eventually consistent system can be optimized and simplified by letting elements of the system pass through temporarily inconsistent states, knowing that a larger end-to-end process exists to resolve those inconsistencies. I like the example of a tiered ordering system (like the one Amazon uses) that does not force every request through a central inventory-control choke point, but where the overall system needs some kind of resolution process in any case — for example, notifying the customer that a book is out of stock. The last copy of a book might get run over by a forklift in the warehouse before the order is fulfilled anyway. Once you recognize that an end-to-end resolution process is both necessary and in place, the internal design of the system can be optimized to take advantage of it.
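
To make the shape of that concrete, here is a minimal sketch (hypothetical names, nothing Amazon-specific) of the “accept now, resolve end-to-end later” pattern: two front ends accept orders against their own cached view of inventory, and a separate reconciliation step handles any oversell.

```python
# Two front ends each accept against a possibly-stale local cache; the end-to-end
# reconciliation step resolves the inconsistency (fulfill or notify the customer).
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    sku: str
    status: str = "accepted"          # accepted -> fulfilled | backordered

def accept(order: Order, cached_stock: dict) -> Order:
    # No central inventory choke point: the check uses a local, possibly-stale cache,
    # so two front ends can both "sell" the last copy. That is allowed on purpose.
    if cached_stock.get(order.sku, 0) > 0:
        cached_stock[order.sku] -= 1
    return order

def reconcile(orders: list, warehouse_stock: dict) -> None:
    # The end-to-end resolution step: fulfill what physically exists, and
    # backorder (notify the customer) for anything that does not.
    for order in orders:
        if warehouse_stock.get(order.sku, 0) > 0:
            warehouse_stock[order.sku] -= 1
            order.status = "fulfilled"
        else:
            order.status = "backordered"

front_end_a = {"book-123": 1}         # each front end holds its own stale snapshot
front_end_b = {"book-123": 1}
orders = [accept(Order("o1", "book-123"), front_end_a),
          accept(Order("o2", "book-123"), front_end_b)]
reconcile(orders, warehouse_stock={"book-123": 1})
print([(o.order_id, o.status) for o in orders])   # [('o1', 'fulfilled'), ('o2', 'backordered')]
```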

In fact, it is exactly this design flexibility — in service of ongoing performance optimization or of delivering other system properties — that makes the end-to-end approach so powerful. End-to-end thinking usually leaves room for flexibility in internal performance, so the overall system ends up more robust and more adaptable to changes in the characteristics of each of its components. This is what makes the end-to-end approach “anti-fragile” and resilient to change over time.

One implication of the end-to-end approach is that you want to be very careful about adding layers and functionality that eliminate overall performance flexibility. (Or other kinds of flexibility, but performance — especially latency — tends to be special.) If you expose the raw performance of the layers you build, end-to-end approaches can leverage that performance to optimize for their specific requirements. If you compromise that performance, even while providing important value-added functionality, you eliminate design flexibility.

The end-to-end argument intersects with organizational design when a system is large and complex enough that whole teams are assigned to internal components. The natural tendency of those teams is to extend the functionality of their components, often in ways that begin to eliminate the design flexibility of the applications trying to deliver end-to-end functionality on top of them.

One of the challenges in applying the end-to-end approach is figuring out where the end actually is. “Little fleas have littler fleas… and so on ad infinitum.”

Focusing on Complexity

Coding is an exacting art — every line has to be right for the program to run correctly. But that is misleading. Programs are not uniform, either in the complexity of their components or in the complexity of the interactions between those components. The most robust programs isolate complexity in a way that lets the most important parts of the system appear simple and straightforward, and lets them interact with the other components of the system in simple ways. Complexity hiding can look isomorphic to other design approaches such as information hiding and data abstraction, but I find there is a distinct design sensibility that comes from really focusing on identifying where the complexity lies and how to isolate it.

The example I keep coming back to is the screen repaint algorithm I wrote for the character-mode video terminal editors of the early days, editors like VI and EMACS. Early video terminals implemented control sequences for the core operation of painting characters, plus additional display functions for optimizing updates, such as scrolling the current lines up or down, inserting new lines, or moving characters within a line. Each of these commands had a different cost, and those costs varied across devices from different manufacturers. (See TERMCAP for a link to the code and the history.) A full-screen application such as a text editor wanted to update the screen as quickly as possible, so it needed to optimize its use of these control sequences when transitioning the screen from one state to another.

These applications were designed so that this underlying complexity was hidden. The part of the system that modified the text buffer (where most of the innovation in functionality happened) could be completely ignorant of how those changes were translated into screen update commands. This was possible because the performance cost of computing the optimal update for any change in the content was swamped by the cost of actually executing the update commands on the terminal itself. That kind of performance analysis — deciding where complexity can be hidden — is a common pattern in system design. The screen update process could run asynchronously to the changes in the underlying text buffer and could be independent of the actual historical sequence of changes to the buffer. It did not matter how the buffer had changed, only what had changed. This combination — asynchronous coupling, removing any dependence on the historical path of interactions between components, and having a natural way to batch interactions efficiently — is a common characteristic of approaches that hide coupling complexity.
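
As a sketch of that decoupling (illustrative only — not the original editor code), the painter below consumes only the old and new screen contents; it never sees the sequence of edits that produced them:

```python
# The screen updater compares the previous and desired screen contents and emits
# update commands for what changed. It never needs to know which edits produced
# the new state, or in what order.
def compute_updates(old_lines: list, new_lines: list) -> list:
    """Return (row, new_text) commands for the rows that actually differ."""
    updates = []
    for row in range(max(len(old_lines), len(new_lines))):
        old = old_lines[row] if row < len(old_lines) else ""
        new = new_lines[row] if row < len(new_lines) else ""
        if old != new:
            updates.append((row, new))
    return updates

# The buffer side batches any number of edits; the painter asks only "what changed?"
before = ["hello world", "second line"]
after  = ["hello, world!", "second line"]
print(compute_updates(before, after))   # [(0, 'hello, world!')]
```

A real implementation would also weigh the per-terminal costs of scrolling and line-insertion commands, but the point is the boundary: the buffer batches edits however it likes, and the updater only ever asks what is different now.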

Whether complexity hiding succeeds is determined not by the component doing the hiding but by the consumers of that component. This is one reason component providers often end up being particular about certain aspects of the end-to-end use of their component. They need a clear understanding of how the rest of the system interacts with their component and how (and whether) its internal complexity leaks out. Feedback along the lines of “this component is hard to use” usually means either that it has failed to effectively hide its internal complexity or that it has not chosen a functional boundary at which the complexity can be hidden.

Layering and Componentization

The system designer plays a fundamental role in deciding how to break a system into components and layers — deciding what to build and what to pick up from elsewhere. Open source may change the economics of the “build vs. buy” decision, but the dynamics are the same. In large-scale engineering, a crucial factor is understanding how these decisions will play out over time. Change is fundamental to everything we do as programmers, so these design choices (layering and componentization) are evaluated not only in the moment but over the years ahead, as the product continues to evolve.

Here are a few things about system decomposition that end up having a large element of time in them and therefore tend to take longer to learn and appreciate.

  • Layers are leaky. Layers (or abstractions) are fundamentally leaky. These leaks have consequences immediately but also have consequences over time, in two ways. One consequence is that the characteristics of the layer leak through and permeate more of the system than you realize. These might be assumptions about specific performance characteristics or behavior ordering that is not an explicit part of the layer contract. This means that you generally are more vulnerable to changes in the internal behavior of the component that you understood. A second consequence is it also means you are more dependent on that internal behavior than is obvious, so if you consider changing that layer the consequences and challenges are probably larger than you thought.
  • Layers are too functional. It is almost a truism that a component you adopt will have more functionality than you actually require. In some cases, the decision to use it is based on leveraging that functionality for future uses. You adopt specifically because you want to “get on the train” and leverage the ongoing work that will go into that component. There are a few consequences of building on this highly functional layer. 1) The component will often make trade-offs that are biased by functionality that you do not actually require. 2) The component will embed complexity and constraints because of functionality you do not require and those constraints will impede future evolution of that component. 3) There will be more surface area to leak into your application. Some of that leakage will be due to true “leaky abstractions” and some will be explicit (but generally poorly controlled) increased dependence on the full capabilities of the component. Office is big enough that we found that for any layer we built on, we eventually fully explored its functionality in some part of the system. While that might appear to be positive (we are more completely leveraging the component), all uses are not equally valuable. So we end up having a massive cost to move from one layer to another based on this long-tail of often lower value and poorly recognized use cases. 4) The additional functionality creates complexity and opportunities for misuse. An XML validation API we used would optionally dynamically download the schema definition if it was specified as part of the XML tree. This was mistakenly turned on in our basic file parsing code which resulted in both a massive performance degradation as well as an (unintentional) distributed denial of service attack on a w3c.org web server. (These are colloquially known as “land mine” APIs.) (A sketch of guarding against this kind of land mine follows the list.)
  • Layers get replaced. Requirements evolve, systems evolve, components are abandoned. You eventually need to replace that layer or component. This is true for external component dependencies as well as internal ones. This means that the issues above will end up becoming important.
  • Your build vs. buy decision will change. This is partly a corollary of above. This does not mean the decision to build or buy was wrong at the time. Often there was no appropriate component when you started and it only becomes available later. Or alternatively, you use a component but eventually find that it does not match your evolving requirements and your requirements are narrow enough, well-understood or so core to your value proposition that it makes sense to own it yourself. It does mean that you need to be just as concerned about leaky layers permeating more of the system for layers you build as well as for layers you adopt.
  • Layers get thick. As soon as you have defined a layer, it starts to accrete functionality. The layer is the natural throttle point to optimize for your usage patterns. The difficulty with a thick layer is that it tends to reduce your ability to leverage ongoing innovation in underlying layers. In some sense this is why OS companies hate thick layers built on top of their core evolving functionality — the pace at which innovation can be adopted is inherently slowed. One disciplined approach to avoid this is to disallow any additional state storage in an adaptor layer. Microsoft Foundation Classes took this general approach in building on top of Win32. It is inevitably cheaper in the short term to just accrete functionality on to an existing layer (leading to all the eventual problems above) rather than refactoring and recomponentizing. A system designer who understands this looks for opportunities to break apart and simplify components rather than accrete more and more functionality within them.
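
On the “land mine” XML example above: the defensive posture is to make sure a plain file-parsing path can never reach out to the network. A minimal sketch, assuming the lxml library (not the API Office actually used), might look like this:

```python
# Defuse the land mine: a file-parsing path should never fetch schemas, DTDs or
# entities over the network just because the document asks it to.
from lxml import etree

SAFE_PARSER = etree.XMLParser(
    no_network=True,          # refuse to touch the network for external resources
    resolve_entities=False,   # do not expand external entities
    load_dtd=False,           # plain file parsing does not need the DTD at all
)

def parse_untrusted_xml(data: bytes):
    # If validation is required, validate against a schema we ship ourselves,
    # never one referenced from inside the document being parsed.
    return etree.fromstring(data, parser=SAFE_PARSER)
```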

Einsteinian Universe

I had been designing asynchronous distributed systems for decades but was struck by this quote from Pat Helland, a SQL architect, at an internal Microsoft talk. “We live in an Einsteinian universe — there is no such thing as simultaneity.” When building distributed systems — and virtually everything we build is a distributed system — you cannot hide the distributed nature of the system. It’s just physics. This is one of the reasons I’ve always felt Remote Procedure Call, and especially “transparent” RPC that explicitly tries to hide the distributed nature of the interaction, is fundamentally wrong-headed. You need to embrace the distributed nature of the system since the implications almost always need to be plumbed completely through the system design and into the user experience.

Embracing the distributed nature of the system leads to a number of things:

  • You think through the implications to the user experience from the start rather than trying to patch on error handling, cancellation and status reporting as an afterthought.
  • You use asynchronous techniques to couple components. Synchronous coupling is impossible. If something appears synchronous, it’s because some internal layer has tried to hide the asynchrony and in doing so has obscured (but definitely not hidden) a fundamental characteristic of the runtime behavior of the system.
  • You recognize and explicitly design for interacting state machines and that these states represent robust long-lived internal system states (rather than ad-hoc, ephemeral and undiscoverable state encoded by the value of variables in a deep call stack).
  • You recognize that failure is expected. The only guaranteed way to detect failure in a distributed system is to simply decide you have waited “too long”. This naturally means that cancellation is first-class. Some layer of the system (perhaps plumbed through to the user) will need to decide it has waited too long and cancel the interaction. Cancelling is only about reestablishing local state and reclaiming local resources — there is no way to reliably propagate that cancellation through the system. It can sometimes be useful to have a low-cost, unreliable way to attempt to propagate cancellation as a performance optimization.
  • You recognize that cancellation is not rollback since it is just reclaiming local resources and state. If rollback is necessary, it needs to be an end-to-end feature.
  • You accept that you can never really know the state of a distributed component. As soon as you discover the state, it may have changed. When you send an operation, it may be lost in transit, it might be processed but the response is lost, or it may take some significant amount of time to process so the remote state ultimately transitions at some arbitrary time in the future. This leads to approaches like idempotent operations and the ability to robustly and efficiently rediscover remote state rather than expecting that distributed components can reliably track state in parallel. The concept of “eventual consistency” succinctly captures many of these ideas. (A small sketch of an idempotent, timeout-bounded request follows this list.)
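
Here is a small sketch of two of those ideas together — the caller decides when it has waited “too long” (cancellation is purely local), and the server remembers completed request ids so that retries of the same operation are idempotent. The names and timings are illustrative only:

```python
import asyncio

class IdempotentServer:
    def __init__(self) -> None:
        self.completed: dict = {}            # request_id -> result of completed work

    async def handle(self, request_id: str, op: str) -> str:
        if request_id in self.completed:     # duplicate delivery: return the same answer
            return self.completed[request_id]
        await asyncio.sleep(0.01)            # stand-in for the real work
        result = f"done:{op}"
        self.completed[request_id] = result
        return result

async def call_with_timeout(server: IdempotentServer, request_id: str, op: str,
                            timeout_s: float, attempts: int = 3) -> str:
    for _ in range(attempts):
        try:
            return await asyncio.wait_for(server.handle(request_id, op), timeout_s)
        except asyncio.TimeoutError:
            # We decided we waited too long. Retrying with the same request_id is
            # safe precisely because the operation is idempotent.
            continue
    raise TimeoutError("gave up after waiting too long")

print(asyncio.run(call_with_timeout(IdempotentServer(), "req-42", "charge-card", timeout_s=0.1)))
```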

I like to say you should “revel in the asynchrony”. Rather than trying to hide it, you accept it and design for it. When you see a technique like idempotency or immutability, you recognize them as ways of embracing the fundamental nature of the universe, not just one more design tool in your toolbox.

Performance

I am sure Don Knuth is horrified by how misunderstood his partial quote “Premature optimization is the root of all evil” has been. In fact, performance, and the incredible exponential improvements in performance that have continued for over 6 decades (or more than 10 decades depending on how willing you are to project these trends through discrete transistors, vacuum tubes and electromechanical relays), underlie all of the amazing innovation we have seen in our industry and all the change rippling through the economy as “software eats the world”.

A key thing to recognize about this exponential change is that while all components of the system are experiencing exponential change, these exponentials are divergent. So the rate of increase in capacity of a hard disk changes at a different rate from the capacity of memory or the speed of the CPU or the latency between memory and CPU. Even when trends are driven by the same underlying technology, exponentials diverge. Latency improvements fundamentally trail bandwidth improvements. Exponential change tends to look linear when you are close to it or over short periods but the effects over time can be overwhelming. This overwhelming change in the relationship between the performance of components of the system forces reevaluation of design decisions on a regular basis.

A consequence of this is that design decisions that made sense at one point no longer make sense after a few years. Or in some cases an approach that made sense two decades ago starts to look like a good trade-off again. Modern memory mapping has characteristics that look more like process swapping of the early time-sharing days than it does like demand paging. (This does sometimes result in old codgers like myself claiming that “that’s just the same approach we used back in ‘75” — ignoring the fact that it didn’t make sense for 40 years and now does again because some balance between two components — maybe flash and NAND rather than disk and core memory — has come to resemble a previous relationship).

Important transitions happen when these exponentials cross human constraints. So you move from a limit of two to the sixteenth characters (which a single user can type in a few hours) to two to the thirty-second (which is beyond what a single person can type). So you can capture a digital image with higher resolution than the human eye can perceive. Or you can store an entire music collection on a hard disk small enough to fit in your pocket. Or you can store a digitized video recording on a hard disk. And then later the ability to stream that recording in real time makes it possible to “record” it by storing it once centrally rather than repeatedly on thousands of local hard disks.

The things that stay as a fundamental constraint are three dimensions and the speed of light. We’re back to that Einsteinian universe. We will always have memory hierarchies — they are fundamental to the laws of physics. You will always have stable storage and IO, memory, computation and communications. The relative capacity, latency and bandwidth of these elements will change, but the system is always about how these elements fit together and the balance and tradeoffs between them. Jim Gray was the master of this analysis.

Another consequence of the fundamentals of 3D and the speed of light is that much of performance analysis is about three things: locality, locality, locality. Whether it is packing data on disk, managing processor cache hierarchies, or coalescing data into a communications packet, how data is packed together, the patterns for how you touch that data with locality over time and the patterns of how you transfer that data between components is fundamental to performance. Focusing on less code operating on less data with more locality over space and time is a good way to cut through the noise.
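
A toy illustration of that coalescing point: moving the same bytes across a boundary in many tiny transfers versus one batched transfer. The numbers are made up, but the shape of the result — per-transfer overhead dominating un-batched work — is the same arithmetic that makes packing data into cache lines, disk blocks and network packets so important:

```python
# Batched vs. un-batched transfers: the same megabyte is written either as
# 65,536 sixteen-byte writes or as a single write.
import os, tempfile, time

payload = b"x" * (1 << 20)                      # 1 MiB to move across the boundary

def tiny_writes(path: str) -> None:
    with open(path, "wb", buffering=0) as f:    # unbuffered: every write is a real syscall
        for i in range(0, len(payload), 16):
            f.write(payload[i:i + 16])

def one_batched_write(path: str) -> None:
    with open(path, "wb", buffering=0) as f:
        f.write(payload)                        # same bytes, one transfer

out_dir = tempfile.mkdtemp()
for fn in (tiny_writes, one_batched_write):
    path = os.path.join(out_dir, fn.__name__ + ".bin")
    start = time.perf_counter()
    fn(path)
    print(fn.__name__, f"{time.perf_counter() - start:.4f}s")
```

The per-operation cost, not the byte count, is what dominates the first version — which is why locality of access and batching of transfers show up at every level of the memory and communications hierarchy.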

Jon Devaan used to say “design the data, not the code”. This also generally means when looking at the structure of a system, I’m less interested in seeing how the code interacts — I want to see how the data interacts and flows. If someone tries to explain a system by describing the code structure and does not understand the rate and volume of data flow, they do not understand the system.

A memory hierarchy also implies we will always have caches — even if some system layer is trying to hide it. Caches are fundamental but also dangerous. Caches are trying to leverage the runtime behavior of the code to change the pattern of interaction between different components in the system. They inherently need to model that behavior, even if that model is implicit in how they fill and invalidate the cache and test for a cache hit. If the model is poor or becomes poor as the behavior changes, the cache will not operate as expected. A simple guideline is that caches must be instrumented — their behavior will degrade over time because of changing behavior of the application and the changing nature and balance of the performance characteristics of the components you are modeling. Every long-time programmer has cache horror stories.
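
A minimal sketch of what “caches must be instrumented” can mean in practice — an LRU cache that counts hits, misses and evictions so its effectiveness can be watched rather than assumed (names here are illustrative, not from any particular system):

```python
from collections import OrderedDict

class InstrumentedCache:
    def __init__(self, capacity: int, loader):
        self.capacity = capacity
        self.loader = loader                   # called on a miss to fetch the real value
        self.entries: OrderedDict = OrderedDict()
        self.hits = self.misses = self.evictions = 0

    def get(self, key):
        if key in self.entries:
            self.hits += 1
            self.entries.move_to_end(key)      # mark as most recently used
            return self.entries[key]
        self.misses += 1
        value = self.loader(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # evict least recently used
            self.evictions += 1
        return value

    def stats(self) -> str:
        total = self.hits + self.misses
        rate = self.hits / total if total else 0.0
        return f"hit rate {rate:.1%} ({self.hits}/{total}), evictions {self.evictions}"

# Usage: watch the hit rate; if the access pattern drifts, the numbers say so.
cache = InstrumentedCache(capacity=2, loader=lambda k: k * k)
for k in (1, 2, 1, 3, 1, 4):
    cache.get(k)
print(cache.stats())
```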

I was lucky that my early career was spent at BBN, one of the birthplaces of the Internet. It was very natural to think about communications between asynchronous components as the natural way systems connect. Flow control and queueing theory are fundamental to communications systems and more generally the way that any asynchronous system operates. Flow control is inherently resource management (managing the capacity of a channel) but resource management is the more fundamental concern. Flow control also is inherently an end-to-end responsibility, so thinking about asynchronous systems in an end-to-end way comes very naturally. The story of buffer bloat is well worth understanding in this context because it demonstrates how lack of understanding the dynamics of end-to-end behavior coupled with technology “improvements” (larger buffers in routers) resulted in very long-running problems in the overall network infrastructure.
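
A toy, in-process version of that flow-control idea: a bounded queue makes the producer block (back-pressure) instead of letting an unbounded buffer absorb the mismatch — the small-scale analogue of the over-buffered routers in the buffer bloat story:

```python
# The bounded queue is the channel; a full queue pushes back on the producer
# instead of buffering without limit.
import queue, threading, time

channel = queue.Queue(maxsize=8)     # channel capacity: the resource being managed

def producer() -> None:
    for item in range(64):
        channel.put(item)            # blocks when the channel is full: back-pressure
    channel.put(None)                # sentinel: end of stream

def consumer() -> None:
    while True:
        item = channel.get()
        if item is None:
            break
        time.sleep(0.005)            # the slow end of the pipe sets the overall pace

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads: t.start()
for t in threads: t.join()
print("done: the producer was paced by the consumer, not by available memory")
```

Making the buffer huge instead (maxsize=0 makes this queue unbounded) hides the mismatch until latency balloons, which is the buffer bloat failure mode in miniature.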

The concept of “light speed” is one that I’ve found useful in analyzing any system. A light speed analysis doesn’t start with the current performance, it asks “what is the best theoretical performance I could achieve with this design?” What is the real information content being transferred and at what rate of change? What is the underlying latency and bandwidth between components? A light speed analysis forces a designer to have a deeper appreciation for whether their approach could ever achieve the performance goals or whether they need to rethink their basic approach. It also forces a deeper understanding of where performance is being consumed and whether this is inherent or potentially due to some misbehavior. From a constructive point of view, it forces a system designer to understand what are the true performance characteristics of their building blocks rather than focusing on the other functional characteristics.
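
A light speed analysis can be as simple as a few lines of arithmetic. The sketch below (with made-up numbers) computes the best case a design could possibly achieve given the data that truly has to move and the latency and bandwidth of the links it crosses; the gap between that floor and what you measure is the design's problem to explain:

```python
def best_case_seconds(payload_bytes: int, round_trips: int,
                      one_way_latency_s: float, bandwidth_bytes_per_s: float) -> float:
    # Every required round trip pays the latency twice; the payload is bandwidth-bound.
    return 2 * round_trips * one_way_latency_s + payload_bytes / bandwidth_bytes_per_s

# Example: 2 MB of genuinely required data, 3 sequential round trips,
# 30 ms one-way latency, 100 Mbit/s of usable bandwidth.
floor = best_case_seconds(2_000_000, round_trips=3, one_way_latency_s=0.030,
                          bandwidth_bytes_per_s=100e6 / 8)
print(f"theoretical floor ~{floor:.2f} s")   # anything measured above this is the design's doing
```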

I spent much of my career building graphical applications. A user sitting at one end of the system defines a key constant and constraint in any such system. The human visual and nervous system is not experiencing exponential change. The system is inherently constrained, which means a system designer can leverage (must leverage) those constraints, e.g. by virtualization (limiting how much of the underlying data model needs to be mapped into view data structures) or by limiting the rate of screen update to the perception limits of the human visual system.
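
A minimal sketch of that virtualization idea (hypothetical names): the data model may hold millions of rows, but only the slice inside the viewport is ever turned into view structures, because the human at the end of the system can only look at a screenful:

```python
def visible_rows(model: list, first_visible: int, viewport_rows: int) -> list:
    # Map only the slice the user can currently see into "view" objects.
    last = min(first_visible + viewport_rows, len(model))
    return [f"<row>{model[i]}</row>" for i in range(first_visible, last)]

model = [f"record {i}" for i in range(1_000_000)]          # large underlying data model
view = visible_rows(model, first_visible=500_000, viewport_rows=40)
print(len(view))                                           # 40 view objects, not 1,000,000
```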

The Nature of Complexity

I have struggled with complexity my entire career. Why do systems and apps get complex? Why doesn’t development within an application domain get easier over time as the infrastructure gets more powerful rather than getting harder and more constrained? In fact, one of our key approaches for managing complexity is to “walk away” and start fresh. Often new tools or languages force us to start from scratch which means that developers end up conflating the benefits of the tool with the benefits of the clean start. The clean start is what is fundamental. This is not to say that some new tool, platform or language might not be a great thing, but I can guarantee it will not solve the problem of complexity growth. The simplest way of controlling complexity growth is to build a smaller system with fewer developers.

Of course, in many cases “walking away” is not an alternative — the Office business is built on hugely valuable and complex assets. With OneNote, Office “walked away” from the complexity of Word in order to innovate along a different dimension. Sway is another example where Office decided that we needed to free ourselves from constraints in order to really leverage key environmental changes and the opportunity to take fundamentally different design approaches. With the Word, Excel and PowerPoint web apps, we decided that the linkage with our immensely valuable data formats was too fundamental to walk away from and that has served as a significant and ongoing constraint on development.

I was influenced by Fred Brooks's “No Silver Bullet” essay about accident and essence in software development. There is much irreducible complexity embedded in the essence of what the software is trying to model. I just recently re-read that essay and found it surprising on re-reading that two of the trends he imbued with the most power to impact future developer productivity were increasing emphasis on “buy” in the “build vs. buy” decision — foreshadowing the change that open-source and cloud infrastructure has had. The other trend was the move to more “organic” or “biological” incremental approaches over more purely constructivist approaches. A modern reader sees that as the shift to agile and continuous development processes. This in 1986!

I have been much taken with the work of Stuart Kauffman on the fundamental nature of complexity. Kauffman builds up from a simple model of Boolean networks (“NK models”) and then explores the application of this fundamentally mathematical construct to things like systems of interacting molecules, genetic networks, ecosystems, economic systems and (in a limited way) computer systems to understand the mathematical underpinning to emergent ordered behavior and its relationship to chaotic behavior. In a highly connected system, you inherently have a system of conflicting constraints that makes it (mathematically) hard to evolve that system forward (viewed as an optimization problem over a rugged landscape). A fundamental way of controlling this complexity is to break the system into independent elements and limit the interconnections between elements (essentially reducing both “N” and “K” in the NK model). Of course this feels natural to a system designer applying techniques of complexity hiding, information hiding and data abstraction and using loose asynchronous coupling to limit interactions between components.
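
For the curious, here is a toy (and heavily simplified) version of an NK landscape that makes the effect concrete: as K grows, a simple adaptive walk tends to run out of improving moves sooner because more constraints conflict. This is only a sketch to illustrate the idea, not a faithful reproduction of Kauffman's models:

```python
# Toy NK landscape: each of N bits contributes a random value that depends on itself
# and K other bits. More interconnection (larger K) means more conflicting constraints.
import random

def make_fitness(n: int, k: int, rng: random.Random):
    neighbors = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    table: dict = {}   # contribution values, drawn lazily but fixed once assigned

    def fitness(bits: list) -> float:
        total = 0.0
        for i in range(n):
            key = (i, bits[i], tuple(bits[j] for j in neighbors[i]))
            total += table.setdefault(key, rng.random())
        return total / n
    return fitness

def adaptive_walk_length(fitness, n: int, rng: random.Random) -> int:
    # Greedy one-bit-flip walk; stop when no flip improves fitness (a local optimum).
    bits = [rng.randint(0, 1) for _ in range(n)]
    current, steps = fitness(bits), 0
    improved = True
    while improved:
        improved = False
        for i in rng.sample(range(n), n):
            bits[i] ^= 1
            candidate = fitness(bits)
            if candidate > current:
                current, steps, improved = candidate, steps + 1, True
                break
            bits[i] ^= 1   # revert: this move conflicted with other constraints
    return steps

rng = random.Random(0)
for k in (1, 4, 8):
    walks = [adaptive_walk_length(make_fitness(20, k, rng), 20, rng) for _ in range(20)]
    print(f"K={k}: average improving steps before getting stuck = {sum(walks) / len(walks):.1f}")
# Larger K tends to strand the walk sooner — the toy analogue of a tightly coupled
# system being hard to evolve, and of why decomposition (small K) keeps it evolvable.
```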

A challenge we always face is that many of the ways we want to evolve our systems cut across all dimensions. Real-time co-authoring has been a very concrete (and complex) recent example for the Office apps.

Complexity in our data models often equates with “power”. An inherent challenge in designing user experiences is that we need to map a limited set of gestures into a transition in the underlying data model state space. Increasing the dimensions of the state space inevitably creates ambiguity in the user gesture. This is “just math” which means that often times the most fundamental way to ensure that a system stays “easy to use” is to constrain the underlying data model.

Management

I started taking leadership roles in high school (student council president!) and always found it natural to take on larger responsibilities. At the same time, I was always proud that I continued to be a full-time programmer through every management stage. VP of development for Office finally pushed me over the edge and away from day-to-day programming. I’ve enjoyed returning to programming as I stepped away from that job over the last year — it is an incredibly creative and fulfilling activity (and maybe a little frustrating at times as you chase down that “last” bug).

Although I had been a “manager” for over a decade before I arrived, I only really learned about management after coming to Microsoft in 1996. Microsoft emphasized that “engineering leadership is technical leadership.” That matched my own perspective and helped me accept and take on larger management responsibilities.

The thing that resonated most deeply when I arrived was the fundamental Office culture of transparency. A manager’s job is to design and use transparent processes to drive the project. Transparency is not simple or automatic, and it does not happen just because of good intentions — it has to be designed into the system. The best transparency comes from being able to track, at fine granularity, the output of individual engineers in their day-to-day work (work items completed, bugs opened and fixed, scripts completed). Beware the red/green/yellow status bar! Beware the thumbs-up/thumbs-down dashboard!

I used to say that my job was to design feedback loops. Transparent processes give every participant in the process — from individual engineer to manager — a way to see and use the data being tracked to drive the process and its results, and to understand the role they play in the overall project goals. Ultimately, transparency turns out to be a huge aid to empowerment — managers can push more and more decisions down, closer to the problem, confident that those decisions will actually move things forward. Coordination emerges naturally.

The key is that the goal has actually been framed precisely (including key constraints on resources such as the ship date). Decisions that constantly have to flow up and down the management chain usually reflect that management has not framed the goals and constraints precisely enough.

It was back at Beyond Software that I really came to appreciate the importance of having a single, unambiguous leader for a project. The engineering manager had left (he would later recruit me to FrontPage), and the four of us who remained were all hesitant to take on the responsibility — not least because none of us knew how long we would stick around. We were all technically sharp and got along well, so we decided to lead the project together. It was a mess. The one obvious problem was that we had no strategy for allocating resources up front — one of the primary responsibilities of management! The deeper problem was that we had no single leader responsible for unifying the goals and defining the constraints.

I have a vivid memory of the first time I fully appreciated the importance of listening as a leader. I had just taken the job of group development manager for Word, OneNote, Publisher and Text Services. There was a significant controversy about how we were organizing the text services team, so I went and listened carefully to every key participant, heard what they had to say, and wrote up a summary of everything I had heard. When I showed them the write-up, the reaction was “wow, you actually listened to what I was saying!” It drove home how important it is, as a manager (for example when driving cross-platform changes or system evolution), to listen to every participant. Listening is an active process: it involves trying to understand the other person’s point of view, writing it down, and testing it against your own experience. When a contentious decision is about to be made, make sure every participant has been heard and understood (whether or not they agree with the decision).

It was in an earlier job, as development manager for our web design product, that I internalized the “operational dilemma” inherent in making decisions with partial information. The longer you wait, the more information you have on which to base the decision. But the longer you wait, the less flexibility you have to actually act on it. At some point you just have to make the call.

Designing an organization involves a similar tension. You want to pool resources so that a consistent framework can be applied across more of them, but the more resources you pool, the harder it is to get the information you need to make the right decisions. Organizational design is a balance between these two factors. Software complicates this further because features can cut across any dimension of the organizational design. Office has used shared teams to address these issues (prioritization and resources) — cross-cutting teams that can share the work (adding resources) with the teams they intersect as they build.

One dirty little secret you learn as you move up the management ladder is that you and your new peers do not suddenly get smarter just because you now carry more responsibility. It reinforces the fact that the organization as a whole is smarter than the leaders at the top. Empowering every level to make its own decisions within a well-framed structure is a great way to get the right decisions made. Listening to the people below you, taking responsibility for the whole organization, and explaining the reasoning behind every decision you make is another key. Surprisingly, the fear of making a dumb decision is a useful motivator for making sure you explain your reasoning to the organization and that you have genuinely listened to the people below you.

Conclusion

In the final round of interviews for my first job out of college, the interviewer asked whether I was more interested in working on “systems” or on “apps.” At the time I did not really understand the question. It turns out that hard, interesting problems show up at every level of the software stack — and they have held my interest, and kept me learning, ever since.
