
  def Softwaremaker() :
         return "William Tay", "<Challenging Conventions />"

  knownType_Serialize, about = Softwaremaker()
 

 Monday, September 11, 2006


I will be speaking at the Enterprise Architecture Conference 2006, to be held in Singapore on 25-28 September 2006.

And no - for once, I won't be speaking on Web Services or anything Service-Oriented. Instead, I will tackle head-on a topic that not many people like to go near: Enterprise Architecture Security.

I will be speaking on the second day (27 Sept 2006: 1430-1530), and quite a few of my government clients will be in the audience as well.

The title is: Security Planning and Strategies In An Enterprise Architecture. The agenda is as follows:

  • An outline of the key enterprise security issues and counter-measures, in a technology-agnostic manner
  • An examination of general security threats, and how to plan and implement security policies and controls for commonly performed computer security activities
  • Key security best practices that can be applied to practical real-life scenarios and implemented solutions, such as IP and data security
  • Auditing and monitoring of systems within an Enterprise

If you have SGD2,000 to spend, I hope to see you there.


Monday, September 11, 2006 9:15:18 AM (Malay Peninsula Standard Time, UTC+08:00)

Saturday, September 09, 2006

    One of the topics presented during the Architecture Track at the just-concluded Microsoft TechED South East Asia 2006 was delivered by IASA.

    It was a very interactive session, with the floor questioning the panel from the IASA Malaysian chapter about the value of being an architect and similar topics.

    One of the questions from the floor was: "Do you believe that architects should be writing code ?"

    It seems that 3/4 of the panel members agreed. Some even went on to brag that they still do it in assembly and C++. Hmmm ... In the interest of keeping the session on time, so that the attendees could get home amidst the KL jams - I kept my mouth shut.

    Let me open it now.

    Architects shouldn't code. Period. My thoughts. Period. To prevent myself from rambling, which is what I always do when I have a strong opinion on something, let me list my reasons in point form.

    The term "architect" is a very nebulous term. The hype around it with all the wannabees printing it in bold font on their cards out there (I know a couple of them in Singapore who does them with no shame), who have no idea what is the difference between horizontal and vertical scaling, doesnt make it any better. For better or worse, I only believe in one type - and that is the all-emcompassing solution architect.

    After all, isn't a solution what all customers want ? Do you think they really care what is underneath the solution, beyond the fact that it works well, is kept within their budget, and performs effectively and efficiently within the constraints of their environment ? Therefore, I will speak in the context of Solution Architects.

     

    • In the past, code influenced architecture. The limitations of code written on a chosen platform in a defined era are damning evidence of the limitations of the architecture. VB3, anyone ?

     

    • Bias based on preference. A solution architect with a decade of experience writing code is usually one that doesn't see the other side of the fence, doesn't know that there are better solutions and, worse, refuses to see them. Most of the people in this category are skilled in one platform over another, and it is very hard for people like that to sit down and analyze a problem neutrally, without a pre-conceived notion in mind. Because of this, the solution they propose will likely have the same limitations as the platform they have been so comfortable with. I don't question the depth. Where is the breadth ? These guys should remain what they are good at, be experts in their domains, and probably be paid better than a solution architect. For those who argue that they are equally well-versed on both sides of the fence - good for you. Stay there. It is very likely you are drawing a lot more than an architect. If you are not, you just need to sell yourself better.

     

    • It is all about the business. Solution Architects bridge the gap between (the returns of) the business and (the returns of) technology. So, yes, they should be just as apt with a financial calculator as with an Integrated Development Environment or an installation panel or console. They understand the entire depreciation cycle of an enterprise solution much better than the differences between a thread and a process. So yes, the higher the level of the language they code in (I am talking about 4GLs), the better. Bragging about how much of your work you still do today in first- or second-generation programming languages doesn't do you any good. In fact, it makes you look bad. It shows how out of touch you are with your business, how it is run, what its constraints are, and the entire business and revenue model. Writing performance drivers, assemblers, kernels and runtimes has nothing to do with a company's business model in a world where Moore's Law still rules.

    It is all about the business, still.

     

Saturday, September 09, 2006 3:18:43 PM (Malay Peninsula Standard Time, UTC+08:00)

Tuesday, March 07, 2006

    As if XML-RPC isn't enough (I had blogged about this here and here), now we can add REST-RPC into the mix. The main difference is the use of HTTP to provide application semantics via its verbs. This means there would hardly be any XML or request payload of any kind.
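    To make the distinction concrete, here is a rough sketch (the /orders resource and payloads are made up, not from the posts linked above). The XML-RPC style tunnels the verb inside the payload over a single POST, while the REST-RPC style moves the verb into the HTTP method itself, leaving little or no payload.

    An XML-RPC style call - one HTTP verb (POST), semantics in the body:

      POST /rpc HTTP/1.1
      Host: example.org
      Content-Type: text/xml

      <methodCall>
        <methodName>orders.delete</methodName>
        <params><param><value><int>42</int></value></param></params>
      </methodCall>

    The REST-RPC style equivalent - semantics in the HTTP verb, hardly any payload:

      DELETE /orders/42 HTTP/1.1
      Host: example.org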

Tuesday, March 07, 2006 1:01:34 AM (Malay Peninsula Standard Time, UTC+08:00)

Wednesday, February 15, 2006

    In an article on eWeek, John deVadoss, director of architecture strategy at Microsoft Corp, was quoted as follows:

    [QUOTE]


    Moreover, deVadoss said the edge consists of a provider and consumer model—a provider edge and a consumer edge.

    The consumer edge is the peer-to-peer, Web 2.0 world and the enterprise edge is the SOA, ESB (enterprise service bus) model. In addition, the consumer edge is an asynchronous communications model based on the REST (Representational State Transfer) scheme, and the enterprise edge is based on the Simple Object Access Protocol scheme.

     "REST is a dominant model on the consumer side, and SOAP is the model on the enterprise side," deVadoss said


    I know many people are probably shaking their heads now at the confusion that has arisen from this quote. Actually, while I couldn't quite fully agree with everything said above - the nugget to dig out here is that one resides at the edge of the enterprise (or the high-end processing machines out there) and the other resides at the side of the consumer.

    In reality though, what ultimately serves the consumer are the enterprises, in some way or another. It is the consumer who pays - no ? Therefore it would be safe to assume that there is a mixture of both schemes in any enterprise.

    While SOAP is probably familiar to many, REST shouldn't be a stranger to many more. It is nothing but how the World Wide Web has been working all along. It just has a new label - or should I say that the label is stickier now ? SOAP and all the XML acronyms have just emphasized it more. While you may not need to know the URIs, URLs and the way resource locators work or how they came about - you may need to know the term POX (Plain Old XML).

    In simplistic terms - POX is just XML that doesn't really have a defined structure. In contextual terms, POX is XML that is not SOAP.
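    Here is a minimal, made-up example - the same one-element payload, first as naked POX and then wrapped in the standard SOAP envelope:

      POX - just an XML document of whatever shape the two parties agreed on:

        <stockQuote symbol="MSFT">27.51</stockQuote>

      The same payload as SOAP - nothing changes except the envelope around it:

        <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope">
          <s:Body>
            <stockQuote symbol="MSFT">27.51</stockQuote>
          </s:Body>
        </s:Envelope>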

    Makes more sense now ?

     

Tuesday, February 14, 2006 4:35:44 PM (Malay Peninsula Standard Time, UTC+08:00)

Wednesday, August 31, 2005

    On the back of this, from Paul Fallon's space here, Max Feingold has a really good summary of what's new with WS-AT, WS-BA and WS-C.

    Should make for some good reading on my looooooooong flight to LAX via the Pacific Route.

Wednesday, August 31, 2005 1:56:50 AM (Malay Peninsula Standard Time, UTC+08:00)

Friday, July 15, 2005

    According to Don's blog here, Indigo has had support for REST since Day 1. In the same breath, he disclosed that it will explicitly support HTTP PUT/POST/DELETE/GET to please the REST purists out there.

    Like Steve mentioned here, I, too, am not a fan of technology stacks that compete with their own ideologies. To me, technology is there just to get a job done efficiently and effectively. My post here attracted quite a fair bit of hits and emails echoing the same sentiment. It even made it to the ZiffDavis blog list here.

    As Steve has echoed as well, I am also even more convinced now that Indigo is going to be the platform to beat when it comes to distributed communications. No doubt about it.

    I am really glad that the Indigo team, which I think comprises some of the brightest minds in the software field, is united in working towards the common goal of creating a religion- and ideology-agnostic product. If this is not innovation, I don't know what is.

    Show me the money, Beta1

Friday, July 15, 2005 4:40:50 AM (Malay Peninsula Standard Time, UTC+08:00)

Wednesday, July 13, 2005

    I have been invited by the Ark Group to speak in one of their major technology management conferences in Singapore on the 27th July 2005.

    This conference is themed around Planning and Implementing Service-Oriented Architecture : Developing SOA to Reduce the Cost of Integration, Leverage Legacy Systems and Improve Business Agility.

    My 45-min presentation is titled:

    Design and Security Notes from the Field

    • Distributed Computing
      • Thoughts
      • When to use What
      • More Thoughts
    • Distributed Security
      • Thoughts
      • When to use What
      • More Thoughts

    As is usual with my presentation style, I tend to keep my topics interesting and a refreshing experience. Conferences in this field have always been dominated by academics, researchers and technology vendors who sometimes lose touch with what's happening on the ground. I will go against this bias and inject some field experience in there. It should be an interesting conference for attendees and delegates. I think it will be even more interesting when I sit down with them for the networking lunch.

    I hope to see you there.

Tuesday, July 12, 2005 10:11:01 PM (Malay Peninsula Standard Time, UTC+08:00)

Thursday, July 07, 2005

    I have been following the exchanges between Savas, Jim Webber, and Michi Henning here and here. There are more links but I will leave it up to the interested reader to use a RESTful approach to refer to those resources from the above 2 links.

    To be honest, this has been doing the rounds of the newsgroups, forums, conferences, etc. for some time now, and there is really nothing interesting left to debate.

    All this noise has led to a lot of FUD in the field, and I get a constant barrage of questions at any of the technology conferences I speak at or attend. I work in the field out there, and therefore I tend to approach technology unlike academics, trainers or vendors. I do what clients want in the most efficient way (read: cost- and resource-effective) possible. I have met a few people from academia as well as from the product vendors who entered the field thinking that just because they can point out which exact page number explains ds:SignedInfo in the _WS-Security Specs_, they can convince and conquer the field and have every single customer out there upgrade their existing technological infrastructure every 6 years.

    Well, the Mainframe, CICS and COBOL wonks are still out there, still making a lot of money for their vendors and service providers. People are still driving Ladas, and this legacy will be there for some time, probably a long, long time.

    We get a lot of younger developers who are very confused by the barrage of technologies out there, and sometimes people in the field (which means customers as well) get the short end of the stick when these developers use the wrong technology in different parts of the technical solution. So, yes, some of the stuff you read in The Daily WTF is not fictitious.

    Sometimes I come across comments like this one from Savas's blog here:

    > Microsoft is betting on SOAP and made it a key part of its distributed computing platform, not DCOM.

    Betting on SOAP? Hmmm... .NET remoting does not use SOAP. It uses a binary protocol for performance reasons. So, I'm not sure that Microsoft are "betting on SOAP". They certainly are not for their .NET remoting protocol. And DCOM failed because it could not be made to scale, due to its misguided garbage collection idea. And because DCOM, amazing as that may sound, was even more complex than CORBA.

    Somehow, I feel that either I still don't get the picture, or that irrationality is (still) clouding good judgement.

    Of course .NET Remoting doesn't use SOAP. In fact, it used to, and that support was deprecated for good reasons. It is a distributed object technology, which implies implicit method invocation. SOAP is not a distributed object technology. It is all about services, all about standard schemas, about being explicit in design, and yes, it also means dispatching these XML documents over a "hopeless transport" - the Lada of network protocols today. You cannot compare them, just as you cannot compare the performance of objects and services.

    Is this the best we can do ? Of course NOT. Should we all dump our existing heap of scrap metal in the garage and get the shiniest and fastest aluminium today ? Of course. Are we going to empty our bank accounts and forfeit and compensate on our current loan arrangements to do it ? NO, NOPE, NADA ... This is just a fact that we have to deal with.

    Having said that, to me, both sets of distributed technologies have their place to stay, regardless of what the vendors say to sell more and the trainers say to teach more. Each has its place, and their merits tend to show up best when used and deployed wisely in the different layers, tiers and boundaries of a decent, usable and viable solution. When I say Solution, I mean an entire composition of different new and old systems that services a business program, an initiative and ultimately a goal. Isn't that what we are building systems for in the first place, or have I totally lost my mind and lost track of who my paymaster is ? Don't get me wrong, progress is not possible without full proofs and implementations of the latest rocket science and theories. But progress can be measured in many ways. To most people, progress is measured by how long they can make legacy or existing technologies and architectures last and endure given the rapidly evolving set of standards, protocols and environments, and by how fast they can go home and spend time with their families as age increases and TTL declines.

    COM+, DCOM and .NET Remoting are things we use very frequently in the field, and for good reasons too. I am known to be a (W3C) SOAP wonk, BUT I will not give them up easily within the innards of my systems, and I will use SOAP for the reasons it was designed for. In fact, to me, some of the most important features in SOAP are the @role (@actor) and @mustUnderstand attributes. Or else, I would just stick with plain old XML.

    Is Microsoft betting on SOAP ? You bet, and so is the entire industry. It is a well-known fact that while it is simplistic in design (in fact, this is one of those weird specifications that becomes simpler as newer iterations evolve. Let's hope it remains this way), getting across-the-board agreement is the costly element. In fact, it took 5 years from the original design meeting and prototype to become an "official" specification, and it (SOAP 1.1) never became an official W3C Recommendation. The cost to each participating organization easily crosses several million USD. A rough estimate for putting the final WS-* specs to bed (if there is such a thing) would easily be more than 20 million USD.

    Just like life, the field of software engineering requires compromise. I once had a straight-through exhaust pipe the circumference of a toilet bowl fitted underneath my car, as well as a wide-open air filter under the bonnet. After 3 months, I couldn't deal with the generated noise, or the (less-dense) hot air the air cone was taking in from all angles, so I dumped both of them. While it may take me slightly longer (by a few seconds, perhaps) to reach the market to buy groceries, the brat in the younger me learnt to deal with it.

    While I have my own ideas on the Request/Reply MEP, RPC, Document/Literal messaging, etc., and I like to share my research and thoughts with some of the brightest minds in the industry over a hot cup of Java, it is not something I lose sleep or sweat over. It just serves to keep my sanity in check when I still have to deal with OLE and VB3 issues today, and it does make for good conversation with some of the most intelligent geeks out there.

    Sometimes, I feel the point of technology is lost when people argue over stuff like that. To these people, I recommend a good classic read: Technopoly: The Surrender of Culture to Technology by Neil Postman.

Thursday, July 07, 2005 4:40:24 AM (Malay Peninsula Standard Time, UTC+08:00)

Saturday, June 18, 2005

    WS-MetadataExchange is used and enabled by default in Indigo and it comes with WS-Policy as well.

    This is one specification that I find really useful, because the current convention of using an HTTP GET to fetch the metadata from asmx?WSDL is still a very .NET sort of convention. For my projects, I usually like to put the metadata in a physical file and deploy it as *.wsdl. Other people and toolkits will also have their own naming conventions.

    While an HTTP GET is a very useful convention for retrieving metadata, we cannot assume that all services will support HTTP. In Indigo, it's common to expose an endpoint that only listens over IPC or SOAP-over-TCP.

    At the current Beta 1 drop, only http/https is supported. I do foresee that tcp will be supported in the next major drop.

    In Indigo, the WS-MetadataExchange endpoints are added to the (http) service by default at the container-baseAddress/containee-endpointAddress with a mex suffix. You can also choose to define your own custom mex endpoints. Another wonderful thing is that it allows you to specify a custom metadata file (which I know I will definitely be using). In other words, instead of getting Indigo to automatically generate the metadata, it will send the requestor to the URI location of the metadata file I specify.
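    For reference, in the configuration model of the released WCF bits this is expressed roughly as follows - a sketch only, since element names changed between the Indigo betas and release, and the service name and metadata URI here are hypothetical:

      <service name="SampleService" behaviorConfiguration="mexBehavior">
        <!-- a custom mex endpoint alongside the application endpoints -->
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
      </service>
      <behaviors>
        <serviceBehaviors>
          <behavior name="mexBehavior">
            <!-- point requestors at a hand-authored metadata file instead of generated metadata -->
            <serviceMetadata httpGetEnabled="true"
                             externalMetadataLocation="http://example.org/MyService.wsdl" />
          </behavior>
        </serviceBehaviors>
      </behaviors>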

    If you have done some UDDI work before, you would be accustomed to the convention of sending standardized (W3C) SOAP Request messages for querying purposes. WS-MetadataExchange uses the same sort of principles, albeit with different querying implementations.

    Here is how it works. There is no better way to illustrate this than to get really friendly with the wire-layer plumbing. I have a tool here that will give you a good view of the exchange.


    You need to send a standard SOAP request to the listening mex endpoint that goes like this:

      <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing">
        <s:Header>
          <a:Action s:mustUnderstand="1">http://schemas.xmlsoap.org/ws/2004/09/mex/GetMetadata/Request</a:Action>
          <a:MessageID>uuid:f22ee227-3eeb-460d-9352-1a34e97bf911;id=0</a:MessageID>
          <a:To s:mustUnderstand="1">http://localhost/servicemodelsamples/service.svc/mex</a:To>
          <a:ReplyTo>
            <a:Address>http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:Address>
          </a:ReplyTo>
        </s:Header>
        <s:Body>
          <GetMetadata xmlns="http://schemas.xmlsoap.org/ws/2004/09/mex" />
        </s:Body>
        <!-- OR an empty SOAP Body will do just as fine: -->
        <!-- <s:Body /> -->
      </s:Envelope>

    In the tool I had built, you need to put the a:Action value above into the SOAPAction textbox, as this has to be propagated up to the HTTP headers as a SOAPAction header. This is one of the sticklers of using HTTP to query any SOAP Web Service. Once you click Post, you will see a good-looking response with all the metadata of WSDL and WS-Policy in the SOAP body. I took out a lot of detail below due to space constraints; you should be able to get the point.

      <s:Envelope xmlns:a="http://schemas.xmlsoap.org/ws/2004/08/addressing" xmlns:s="http://www.w3.org/2003/05/soap-envelope">
        <s:Header>
          <a:Action s:mustUnderstand="1">http://schemas.xmlsoap.org/ws/2004/09/mex/GetMetadata/Response</a:Action>
          <a:RelatesTo>uuid:f22ee227-3eeb-460d-9352-1a34e97bf911;id=0</a:RelatesTo>
          <a:To s:mustUnderstand="1">http://schemas.xmlsoap.org/ws/2004/08/addressing/role/anonymous</a:To>
          <ActivityId xmlns="http://schemas.microsoft.com/2004/09/ServiceModel/Diagnostics">uuid:a55fa135-e1b9-4557-a7c3-39cbf042f85d</ActivityId>
        </s:Header>
        <s:Body>
          <wsx:Metadata xmlns:wsx="http://schemas.xmlsoap.org/ws/2004/09/mex">
            <wsx:MetadataSection Dialect="http://schemas.xmlsoap.org/wsdl/" Identifier="http://tempuri.org/">
              <wsdl:definitions>
                <!-- WSDL and XSD GUNK -->
                <!-- XSD GUNK -->
                <!-- WS-Policy GUNK -->
              </wsdl:definitions>
            </wsx:MetadataSection>
          </wsx:Metadata>
        </s:Body>
      </s:Envelope>

    You will be able to load endpoints, bindings and types dynamically at runtime with such information.

    Do take note that Indigo is extremely strict about media types: you need to set the Content-Type HTTP header to application/soap+xml. This can be done in the configuration file.
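    If you want to reproduce what the tool does by hand, here is a minimal VB.NET sketch - the endpoint is the sample one above, the soapEnvelope string stands for the GetMetadata request shown earlier, and error handling is omitted:

      ' Minimal sketch: POST a GetMetadata request to the mex endpoint.
      Dim soapEnvelope As String = "..." ' paste the GetMetadata envelope from above here

      Dim req As System.Net.HttpWebRequest = CType( _
          System.Net.WebRequest.Create("http://localhost/servicemodelsamples/service.svc/mex"), _
          System.Net.HttpWebRequest)
      req.Method = "POST"
      ' Indigo is strict about the media type.
      req.ContentType = "application/soap+xml"
      ' The a:Action value must also travel as the SOAPAction HTTP header.
      req.Headers.Add("SOAPAction", _
          "http://schemas.xmlsoap.org/ws/2004/09/mex/GetMetadata/Request")

      Dim payload As Byte() = System.Text.Encoding.UTF8.GetBytes(soapEnvelope)
      req.ContentLength = payload.Length
      Using stm As System.IO.Stream = req.GetRequestStream()
          stm.Write(payload, 0, payload.Length)
      End Using

      ' Read back the wsx:Metadata response.
      Using rdr As New System.IO.StreamReader(req.GetResponse().GetResponseStream())
          Console.WriteLine(rdr.ReadToEnd())
      End Using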

    Do download it and try it.

    If you think this is a cool new feature in the Microsoft technology space, it is not.

    Web Services Enhancements (WSE) 2.0 also had it. All you need to do is send "http://schemas.microsoft.com/wse/2003/06/RequestDescription" in the Action of a SOAP request to a listening WSE 2.0 service, and you will be presented with the WSDL as well. This is what WseWsdl2.exe does when you pass it a TCP address.

Saturday, June 18, 2005 5:08:50 AM (Malay Peninsula Standard Time, UTC+08:00)

Thursday, May 05, 2005

    Fellow Microsoft Regional Director and software legend Rocky Lhotka posted an interesting entry on message-based WebMethod "overloading" here.

    In it, he recommends (and I quote) that service methods use a request/response or just request method signature: 

       response = f(request)

    or

       f(request)

     "request" and "response" are both messages, defined by a type or schema (with a little "s", not necessarily XSD).

    The actual procedure, f, is then implemented as a Message Router, routing each call to an appropriate handler method depending on the specific type of the request message.
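    A rough sketch of what such a router might look like in VB.NET - the request/response message classes and handler are made-up names for illustration, not Rocky's code:

      ' Made-up request/response message types for illustration.
      Public Class BookRequest
          Public Isbn As String
      End Class

      Public Class BookResponse
          Public Title As String
      End Class

      Public Class MessageRouterSketch
          ' f(request) implemented as a message router: one public entry point,
          ' dispatching on the concrete type of the request message.
          Public Function ProcessRequest(ByVal request As Object) As Object
              If TypeOf request Is BookRequest Then
                  Return HandleBookRequest(CType(request, BookRequest))
              End If
              ' ... more message types routed here as they come along ...
              Throw New ArgumentException("Unknown request message type")
          End Function

          Private Function HandleBookRequest(ByVal request As BookRequest) As BookResponse
              Dim response As New BookResponse()
              ' ... the actual work goes here ...
              Return response
          End Function
      End Class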

    I couldn't agree more. While easier to comprehend, the practice of passing parameters to a Web Method call often sends the wrong message that (W3C) SOAP is just another RPC. Ultimately, if you have no idea of, and don't even want to grok, the innards of the XmlSerializer, you would really think you are passing method parameters across or, worse, that you are ferrying objects across the wire.

    Therefore, I firmly believe that it is for the good of all if you explicitly specify that you are expecting an XmlDocument / XmlElement / XmlNode and that you will dispatch the same from your endpoint. What you do in your implementation (whether it is serializing to a type or deserializing from one) is placed squarely at the mercy of the tools in your hand or the knowledge in your head.

    With the current state of tools (or lack thereof) today, I sit in the same camp as Aaron Skonnard and Christian Weyer (plus others, I believe), as I believe firmly in the contract-first approach. Good devs should dig into what is going on as close to the wire as possible, at least once. Then they can use all the wizards and helpers they want. This will help them better understand what is going on should they need to troubleshoot later (leaky abstractions) or find ways to improve performance.

    This is just something that my team and I have gathered over the years, especially since I have many developers out there working on the same solution offshore in different countries.

    While I agree that not many people out there enjoy or want to grok the angle brackets, and that the lack of tools out there is hugely to blame, tools make a productive developer but not necessarily a proficient one.

    Even today, I still come across developers who think Web Services are about transferring an object across the wire. Terms like "Web References" and "proxies", even if deemed more abstract and dev-friendly, do convey the wrong idea.

    I have always recommended that younger developers who are interested in learning about Web Services / ASMX and SOAP try out the (just-now-defunct) Microsoft SOAP Toolkit first before moving on. I find it a great way to learn about XML SOAP Services, as abstractions are kept to a minimum. Another great SOAP toolkit, given Microsoft's stance of not supporting its own, is Simon Fell's PocketSOAP.

    Another fellow Regional Director, Bill Wagner (who authored the very impressive book "Effective C#"), posted his solution to Rocky's post here. I have used the same sort of approach that Bill documented before, and it is good and does serve its purpose. However (and please correct me if I am wrong), it binds the message contracts and datatypes tightly to the WSDL. If I am going to add a third Request / Response pair of classes, it will render my initial WSDL invalid (unless, of course, I am willing to add an additional endpoint).

    I worked on a project before which specified that (newer) datatypes were to be added in phases, and therefore we had to decouple the XSDs from the WSDL (which is in accordance with one of the best practices of WSDL deployment -- modularity and the separation of concerns). Oh, by the way, this is practiced in Indigo.

    I don't know if there is a way to decouple the XSDs from the WSDL in VS2003 today. Even if there is, I am guessing it is a difficult hack at best. So what I did was to create the XSDs for each datatype as they came along and do an XSD import in my WSDL. At my message exchange contract, I used an open content model with xsd:any. Thereafter, I authored my own WSDL with the help of Christian's and Thinktecture's WSContractFirst. Since the message and the datatype XSDs are all imported, the WSDL actually has a small footprint now. With the wsdl /server switch, xsd:any becomes an XmlElement type. For abstraction within my service, I changed it to an object in my implementation detail.
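    A rough sketch of what that open-content message schema might look like - the element name is illustrative, and the namespace matches the code below:

      <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  targetNamespace="urn:softwaremaker-net.swmstoreexchangemessages">
        <xsd:element name="Request">
          <xsd:complexType>
            <xsd:sequence>
              <!-- open content: any element, from any namespace, validated laxly -->
              <xsd:any namespace="##any" processContents="lax" />
            </xsd:sequence>
          </xsd:complexType>
        </xsd:element>
      </xsd:schema>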

    Note: .NET 1.1's WSDL.exe /server switch, in my mind, is still fairly limited and comes with a couple of annoying things I didn't like, BUT I will expand on this in detail later.

    <System.Xml.Serialization.XmlTypeAttribute([Namespace]:="urn:softwaremaker-net.swmstoreexchangemessages")> _
    Public Class Response
        Public Result As Result
    End Class

    <System.Xml.Serialization.XmlTypeAttribute([Namespace]:="urn:softwaremaker-net.swmstoreexchangemessages")> _
    Public Class Result
        <System.Xml.Serialization.XmlElement(ElementName:="Book", Type:=GetType(BookType), [Namespace]:="urn:softwaremaker-net.swmstoreextendedtypes"), _
         System.Xml.Serialization.XmlElement(ElementName:="CD", Type:=GetType(CDType), [Namespace]:="urn:softwaremaker-net.swmstoreextendedtypes"), _
         System.Xml.Serialization.XmlElement(ElementName:="Anything", Type:=GetType(AnyType), [Namespace]:="urn:softwaremaker-net.swmstoreextendedtypes")> _
        Public Any As Object
    End Class

    Public Function ProcessRequest( _
        <System.Xml.Serialization.XmlElementAttribute([Namespace]:="urn:softwaremaker-net.swmstoreexchangemessages")> ByVal Request As Object) _
        As <System.Xml.Serialization.XmlElementAttribute("Response", [Namespace]:="urn:softwaremaker-net.swmstoreexchangemessages")> Response
        '...
    End Function

    Once the consumer points to my WSDL (I turned off the asmx documentation by default), all the XSDs are imported by VS2003 as well (and that is a good thing). The xsd:any is still an XmlElement at their end, and we leave it as it is, since we cannot control what they do there. The consumer can choose to deal with the raw XML if they want to, OR run xsd.exe imported.xsd /c and let the XmlSerializer do its (limited) magic.

    In this sense, no matter how many datatypes I add or remove, my wire contract remains the same. I just let the endpoints serialize / deserialize the open-content XML (xsd:any). In the project I mentioned earlier, I have one asmx endpoint servicing multiple consumers, each sending different messages that conform to different XSDs (for example, GroupA sends the BookType, GroupB sends the CDType, and a GroupC may come along later sending a type that is unknown to me today). The thing I need to take care of is sending the appropriate datatype XSD to the appropriate groups so they can serialize / deserialize into the appropriate types. As you can see from the code snippet above, at my asmx I just add any new datatypes as they come along.

    While this may seem like a lot compared to Bill's solution above, it was necessary to decouple the datatypes from my WSDL in that project, and the XSD Editor (codenamed daVinci ?) in VS.NET 2003 was the best tool I had in hand at that time. Developers are too comfortable with objects. This is natural, of course, given the decade-long evolution of Object-Orientation and the power of abstraction that OO tools and languages bring today.

    However, other factors like time, extensibility and standardization make abstractions expensive...and these, among others, are the factors that make up the principles of the move towards Service-Orientation. Now, if we have to design services to be explicit, meaning that devs need to know they are explicitly invoking a service across the wire, how far can the tools go in hiding all the plumbing while not hiding the fact that the service is on the other side - and not just a mere shadow copy of a remoting object that seems to reside in the same application domain ?

    Will Service-Orientation fail to hit the mainstream because of this ? I don't know. I guess only time will tell.

Thursday, May 05, 2005 2:22:18 AM (Malay Peninsula Standard Time, UTC+08:00)

Friday, February 11, 2005

    I have been a very interested observer of the recent debate between Tim and Clemens on their definitions of the contract in the context of XSD/WSDL/Policy. Aaron has this to share as well, which I read with great interest.

    I have done a fair amount of work in preparing to give some prezzos and talks about XSD/WSDL/Policy this year. This is the feedback I have gathered over the past few months as developers questioned the reach of interoperability of XML SOAP Services...and most of the time, they have no idea what is going on with the XmlSerializer and the WSDL generator.

    Although I am on Clemens's side when he states that:

    "Staring at WSDL is about as interesting as looking at the output of “midl /Oicf”."

    I do stand on different ground from him in that I believe developers still need to know what is going on in the deeper plumbing of things. Semantic discovery is still a distance away. While the RAD approach of VS.NET and Indigo is great at churning out developer and business productivity, the real world of the enterprise is rather different. Working at a fairly large SI gives me an in-depth look at how disparate systems MUST BE made to integrate and work with each other. Introducing newer, better systems such as Indigo is not even an option at this point in time.

    WSDL-generating tools are still very primitive today. There is no easy way to really extend or version my XML SOAP Services without mucking around with the brackets of WSDL. If you implement some kind of proxy-middleware layer as a messaging layer (known as the Enterprise Service Bus or Message Bus to some), it is highly likely that it is doing things to your XSD or WSDL to attain its advertised benefits. The point is --- you cannot run away from having to muck around with it, and we are not even talking about interoperability yet.

    Clemens, however, did state that his point is only valid "if you’ll be sticking to Indigo". Unfortunately, I don't have the luxury of working in an environment with a single homogeneous platform, and I really doubt that will change in the real world for many years to come. So besides having to know what the wire format looks like, it is equally important to agree on that format first, and this is where XSD/WSDL/Policy comes in.

    I think Aaron hit the point in his post when he compared the roles IDL and XSD/WSDL/Policy play in the definition of a contract. However, I would rather read WSDL than IDL any time of the day.

    I believe that the WSDL generated by default (*.asmx?WSDL) from VS.NET is difficult to read and understand because it is not in line with proper practices. Modularity, re-usability and separation of concerns are not being diligently practiced. That is fine, as Clemens says, since it is not meant to be read by many humans...but not treating it as an official contract is a totally different thing altogether.

    I think WSDL is the indispensable API and is poised to be the universal contract design view. I am a great fan of the contract-first approach, and Thinktecture's WSCF, currently in its 4th rendition, is a huge step in the right direction to realize that design implementation. It does a great job of separating the data types from the message types. I cannot wait for it to go one step further in version 5, where it will generate interfaces to host the service implementation regardless of where the implementation may be, amongst other things. Thanks, Christian, for listening to my pleas.

    And if you are not debating the fact that WSDL is the contract, but it is the readability you have an issue with, there are a couple of ways to make WSDL easier to read. If you are having trouble with the brackets, then why not try the {curly} ones instead ? I used CapeClear's Simplified WSDL Stylesheet to bend the angles into curls.

    And of course, some people from the other camp do share the same views as well.

    Barring any further whopping from Rich Salz on WSDL 2.0, I think some of the recommendations in WSDL 2.0 do make it slightly flatter and therefore easier on the eyes. Examples include:

    1. Removal of message constructs -- these are more or less redundant with Doc/Literal in wide acceptance now, and their content is rightfully placed in the types section.
    2. Renaming of PortTypes to Interfaces and Port to Endpoint can give a Software / XML SOAP Service geek orgasms.
    3. Introducing XML Schema-style enhancements such as wsdl:import and wsdl:include does bring about certain principles of modularity, re-usability and separation of concerns.
    4. ...

     

Friday, February 11, 2005 2:56:39 AM (Malay Peninsula Standard Time, UTC+08:00)

Friday, February 04, 2005

    Bill Gates talks about Microsoft's commitment to interop

    Any questions ?
Friday, February 04, 2005 12:43:49 AM (Malay Peninsula Standard Time, UTC+08:00)

Saturday, January 29, 2005

    Time and time again, I have heard many companies and people talk about how they want to adopt XML Services and (W3C) SOAP so that they can be seen moving with the times, embracing Service-Oriented Architectures.

    I have always stressed that it is a lot more than that. Just because all your applications can emit SOAP and angle brackets, and your consuming applications can "Add Web Reference", doesn't mean your business is in the realm of Service-Orientation.

    This article here puts it very nicely. It is a lot more than most people think, and it takes longer to understand and embrace fully. It is very much about the business processes and, very importantly, the understanding of them...and this takes a tremendous mindset change in business thinking and culture.

    I took the liberty of quoting a few snippets:

    • "Some of the enterprises that are deftly moving toward a service-oriented architecture to exploit the potential of Web services are confronting challenges technology can't always conquer. Users say Web services still suffer from a lack of clear metadata definitions and the need for sometimes significant IT cultural changes. "
       
    • "Trimble learned that even using Web services, it isn't possible for the company to "gracefully and quickly" integrate systems gained in several acquisitions over the past couple of years because of the metadata problems, Denis said. "There's too much fluidity around data objects, [and] we fall back into our own nomenclature and begin to define business objects," he said. "Customer definitions are the most complex challenges for us. We support very different businesses. Our customers are major accounts, channels and end users, so it is difficult to have a one-size-fits-all definition." Until industry standards for metadata management mature, the company must tackle the metadata issues outside the SOA project, he said "
       
    • "But as the project has moved forward, it has been slowed by the lack of standard metadata definitions, which define and describe applications' data, "
       
    • "Learning noted that the migration to Web services required some cultural changes along the way, such as getting customers to change their mind-set about the way they use the system."
       
    • "Denis said that the company must create its own process for managing the disparate ERP systems' metadata because of a lack of tools that can automate the operation. The complex ERP network includes packages from SAP AG, Oracle Corp. and Siebel Systems Inc., some of which were gained via acquisitions."
Saturday, January 29, 2005 8:52:29 AM (Malay Peninsula Standard Time, UTC+08:00)

    Finally, after months of waiting, Microsoft's Enterprise Library is available for download. This is essentially Avanade's famous ACA.NET version 4, which now has an official Microsoft alignment.
Saturday, January 29, 2005 12:59:58 AM (Malay Peninsula Standard Time, UTC+08:00)

Wednesday, December 29, 2004

    Mike Champion has a great post here on the overheads of XML.

    IMHO, I think a binary representation of XML brings on a whole different set of issues, namely removing the abstraction it is supposed to represent.

    From a slightly different perspective, and on the design front, I have been advocating against the use of XML in all layers of an enterprise application, especially where tightly bound object technology is much more desirable. In my presentations on SO(A), I have always preached that service messaging is best used as communication between applications, NOT between tiers of an application.

    However, many businesses are using XML Services as a communication mechanism JUST SO they can be seen as implementing an SO(A)... and of course, for all the wrong reasons.

    Hence, many of them complain when performance suffers.

    What do they expect when they are making verbose, chatty calls to their own Data Access Layer via SOAP ?

Wednesday, December 29, 2004 12:23:13 PM (Malay Peninsula Standard Time, UTC+08:00)

Saturday, December 11, 2004

    To expand on my post and Hervey's a little further, there was mention of the use of Enveloped Signatures in the SOAP Headers. An Enveloped Signature (as defined by XML Digital Signature) is a signature over the XML content that contains the signature as an element; the content provides the root XML document element. Obviously, enveloped signatures must take care not to include their own value in the calculation of the SignatureValue. In other words, an Enveloped Signature would sign the contents of the SOAP headers WITHOUT the signature itself. This is the only way a Security header can be signed without creating a circular reference dependency.

    By doing the above, you are preventing intermediaries from modifying the SOAP Headers.

    However, as taken from the _WS-Security Specs_:


    Because of the mutability of some SOAP headers, producers SHOULD NOT use the Enveloped Signature Transform defined in XML Signature. Instead, messages SHOULD explicitly include the elements to be signed. Similarly, producers SHOULD NOT use the Enveloping Signature defined in XML Signature [XMLSIG]

    WS-Security doesn't *believe* in Enveloped Signatures because it stands on the belief that SOAP Headers are mutable. Since SOAP Headers can change, and the likelihood is there that a SOAP intermediary will change them, an Enveloped Signature would not be as extensible nor work as well.

    I am a strong believer in that. If a normal signature is used instead of an enveloped one, then an intermediary can safely add more tokens and more signatures to a Security header targeted at another node on the message path...and that is why there is WS-Security. Security cannot just be based on end-to-end scenarios, or else SSL / HTTPS would suffice.
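    To illustrate, here is a trimmed sketch (the wsu:Id value is illustrative and the gunk is elided) of the explicit style the spec prefers: the Signature sits in the Security header and points at the parts it covers by reference, so an intermediary can append further tokens and signatures without invalidating it:

      <s:Header>
        <wsse:Security>
          <ds:Signature>
            <ds:SignedInfo>
              <!-- explicit reference to the element being signed -->
              <ds:Reference URI="#msgBody"> ... </ds:Reference>
            </ds:SignedInfo>
            <ds:SignatureValue> ... </ds:SignatureValue>
          </ds:Signature>
          <!-- an intermediary can safely append more tokens and signatures here -->
        </wsse:Security>
      </s:Header>
      <s:Body wsu:Id="msgBody"> ... </s:Body>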

    I further believe that an intermediary should be able to extend Security headers that are meant for other target nodes. Since a SOAP node can only process a single Security header (because of the re-ordering constraints of SOAP Headers), this option may not be as far-fetched or ridiculous as it sounds.

    Of course, anyone can still choose to implement Enveloped Signatures over their SOAP Headers if they are just implementing end-to-end scenarios and enforcing non-tampering measures against any desired or undesired intermediaries. However, extensibility may not be an option here should intermediaries be required to offload certain processing from the ultimate receiver, or even add more tokens and signatures along the way.

Saturday, December 11, 2004 2:57:32 AM (Malay Peninsula Standard Time, UTC+08:00)

Thursday, December 09, 2004

    I have been noticing an increasing number of emails and newsgroup threads asking about the security of UsernameTokens as specified by the _WS-Security Specs_ at OASIS. Most people would like to use them because they are the only alternative they have when X.509 PKI digital certificates are not an option. Here is my personal take on it.

    I think some of the security concerns are slightly misplaced here. Firstly, I don't think WS-I or OASIS would have included UsernameTokens in the WS-Security specifications if they doubted their security. As I like to say --- implementation is key.

    A UsernameToken does NOT use any simpler or less-standard security algorithm than any other token. In fact, it uses the same hashing algorithms and symmetric algorithms, such as the 128-bit-key Advanced Encryption Standard in Cipher Block Chaining (CBC) mode, as any other token. Many people also do not realize that the same symmetric algorithm is used to encrypt the SOAP message body when an asymmetric X509SecurityToken is used as well. The asymmetric algorithm is only used to encrypt the secret key that does the actual symmetric encryption processing. This is done to reduce cipher bloat and increase processing speed. The paranoiac in me, however, would go for a higher-bit key implementation, which is possible.

    Remember that your secret can be stolen, kept for years, and attacked with much higher-end and cheaper deciphering machines in the future. OK, OK, that is my paranoid self talking.
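    A trimmed sketch of how this hybrid scheme shows up on the wire (namespaces and details elided): the small EncryptedKey element carries the AES session key, wrapped under the X.509 token's RSA public key, while the bulk EncryptedData carries the body under that AES key:

      <xenc:EncryptedKey>
        <!-- asymmetric algorithm: used only to wrap the session key -->
        <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" />
        <xenc:CipherData> ... RSA-encrypted AES key ... </xenc:CipherData>
      </xenc:EncryptedKey>

      <xenc:EncryptedData>
        <!-- symmetric algorithm: does the actual bulk encryption -->
        <xenc:EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#aes128-cbc" />
        <xenc:CipherData> ... AES-encrypted SOAP Body contents ... </xenc:CipherData>
      </xenc:EncryptedData>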

    I believe that when statements are made against the security of UsernameTokens, they are really made against the passwords of the UsernameTokens. Hence the statement: "UsernameTokens, on their own, are only as secure as their passwords".

    UsernameTokens are as secure as your passwords. That means that if you have a good security policy on how your company treats passwords, i.e. ...

    1. Minimum password length
    2. Implementation of alphanumeric and other different characters and symbols in password
    3. Password change frequency (in months instead of years)
    4. Elimination of Weak Passwords such as using names and such
    5. ...

    ...then you should NOT be so fearful of using a UsernameToken in your Web Service implementation.

    On the other hand, if you don't treat or administer your passwords with good password policies, then you cannot expect UsernameTokens to give your message as secure a protection as you would like.

    I would also recommend using PasswordOption.SendNone where possible. A hash of the password and other elements is used to produce the cipher; NO password is sent over the wire with this enumerated option. Of course, the only caveat is a dictionary attack, which can be made so much more difficult (or almost impossible) by a good password policy administration system.
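    As an aside, the related password digest that the OASIS UsernameToken profile defines (and that WSE's SendHashed option puts on the wire) is easy to see in code. A minimal VB.NET sketch, with illustrative values:

      ' UsernameToken profile: PasswordDigest = Base64( SHA-1( nonce + created + password ) )
      Dim nonce(15) As Byte
      Dim rng As New System.Security.Cryptography.RNGCryptoServiceProvider()
      rng.GetBytes(nonce) ' a fresh random nonce per token

      Dim created As String = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ")
      Dim password As String = "a-good-long-password" ' illustrative only

      ' Concatenate nonce + created + password, then hash.
      Dim tail As Byte() = System.Text.Encoding.UTF8.GetBytes(created & password)
      Dim input(nonce.Length + tail.Length - 1) As Byte
      nonce.CopyTo(input, 0)
      tail.CopyTo(input, nonce.Length)

      Dim sha1 As System.Security.Cryptography.SHA1 = System.Security.Cryptography.SHA1.Create()
      Dim digest As String = Convert.ToBase64String(sha1.ComputeHash(input))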

    If you have to send your UsernameToken over with PasswordOption.SendPlainText for whatever reason (you are using Windows or LDAP authentication, or you have hashed versions of your passwords stored in your UserDB), you SHOULD encrypt the UsernameToken with an X.509 digital certificate. Read my post here for my own implementation of it.

    Another thing to note relates to the real world and why I believe UsernameTokens have their place here. They are the easiest to implement and are common in any business environment; therefore, they can be plugged into existing IT systems with relatively less effort. Also, since X.509 digital certs are usually used to authenticate machines and / or companies, it would be more expensive and unrealistic to expect every user in a 100+ user organization to have a digital cert and a private / public key pair. Therefore, I strongly believe that UsernameTokens are more apt for authenticating the users themselves in the real world, and they will continue to be one of the most popular ways to authenticate users in the near biometric-less future. However, if you are authenticating between machines, you should opt for X.509 digital certs instead.


    [Author note] I believe WSE 2.0 SP2 has gone to some lengths to make sure that UsernameTokens that transmit a clear-text password are now encrypted.

    • For security reasons, it is strongly recommended to encrypt Username tokens, especially when they contain password information. The SecurityTokenServiceClient class now automatically encrypts any UsernameToken security tokens included in outgoing SOAP requests. Similarly, the SecurityTokenService class automatically encrypts any UsernameToken security tokens included in outgoing SOAP responses.
Thursday, December 09, 2004 1:57:26 AM (Malay Peninsula Standard Time, UTC+08:00)

Saturday, November 13, 2004
    • Individual data objects as endpoints ?
    • Resources (data objects) are like children. They need to have names if they are to participate in society - Fundamental Flaw of UDDI ?
    • Any business problem can be thought of as a data resource manipulation problem and HTTP is a data resource manipulation protocol ?

    While looking to see whether there are any interesting materials and resources on using Representational State Transfer (REST) to implement WEB Services in .NET (since this is WEB in every sense of the word, unlike my special inkling to call them XML Services most of the time), I came across this pretty interesting resource here.

Friday, November 12, 2004 10:44:30 PM (Malay Peninsula Standard Time, UTC+08:00)

Thursday, November 04, 2004

    While a lot of people have been caught up in the recent advancements of the Web Services standards from WS-I, OASIS and the W3C (the advanced communication protocols, the framework, etc.), that is just the tip of the Web Services iceberg.

    Once (not if) XML Web Services become the de-facto standard for cross-domain communication in enterprise mission-critical connected distributed systems (EMCCDS), the focus will shift to the IT infrastructure. Why ? Well, someone or something has to keep it up and make sure it stays running ;) Messages in Service-Orientation are supposed to be self-contained and independent, so the BUS must be there to make sure the dispatch doesn't fail.

    Infrastructure-level Web services are XML services that implement part of the distributed computing infrastructure. They help other Web services communicate with each other. In particular, these robust services (not necessarily WEB Services in any way) are the ones that make or break the communication framework. They provide such functionality as:

    • Provisioning
    • Performance Management (Message Payload, Response Time, etc)
    • Operational Management
    • Usage Metering
    • Billing and Payments
    • Routing (XML-Aware HW Appliances such as Routers and Firewalls, SW Routers)
    • Orchestration (Workflows, Business Processes, etc)
    • Advertisement and Discovery (WS-Discovery is key)
    • Caching (Implementing different techniques and patterns for Caching, etc)
    • Queuing (For added Reliability)
    • State Management and Persistence (Transactional, 2-Phase Commit, Co-ordination, etc)

    To understand this a bit further: with more of a federated approach to services, there will be more applications and servers to enable the use of Identity Management, Authorization, Directory and other shared services. What follows next are supporting services and application hookups. Supporting services are especially important in crowded IT environments. "Retrofitting" has to be considered to improve extensibility (after all, much money has been sunk into creating standards-based interfaces to reusable code). This puts further weight on the current infrastructure. Connectivity is vital here, as monolithic solutions are cleaved apart to form intelligent shared services, then stitched together again via connections to form a logical, virtual composite system.

    Very importantly, all of this is very much tied to Service Level Agreements (SLAs) and Quality of Service (QoS), which will form the basis of the business returns and implications of Web Services in Service-Orientation.

    Anne Thomas Manes, one of the leading thinkers on Web Services, has an article that talks about infrastructure-level Web Services here.

    I am presently head-down and knee-deep in writing a white paper that explains how XML Services and Service-Orientation will eventually empower the IT infrastructure, and covers some of the design considerations as we move towards greater adoption of messaging-based solutions in the EMCCDS.

Thursday, November 04, 2004 1:20:27 AM (Malay Peninsula Standard Time, UTC+08:00)

Friday, October 29, 2004
Friday, October 29, 2004 3:13:03 PM (Malay Peninsula Standard Time, UTC+08:00)