ACKed by Endi. Pushed to Master.
On Fri, 2012-01-06 at 15:22 -0500, Ade Lee wrote:
New updated patch based on feedback from Adam and Endi.
In particular -
1) Moved classes into existing servlet structure.
2) Let exceptions bubble to top level.
3) Moved init code in beans to DAO objects.
Please review.
Thanks,
Ade
On Wed, 2011-12-14 at 11:48 -0500, Adam Young wrote:
> On 12/13/2011 10:21 PM, Ade Lee wrote:
> > On Tue, 2011-12-13 at 18:07 -0500, Adam Young wrote:
> >> On 12/13/2011 04:55 PM, Ade Lee wrote:
> >>> Attached is some skeleton code for the new DRM interface. To build, you
> >>> will need to download and install the candlepin-deps rpm
> >>> (http://repos.fedorapeople.org/repos/candlepin/candlepin/).
> >>>
> >>> We will use these rpms to build/run until we get the resteasy packages
> >>> into Fedora.
> >>>
> >>> The new classes provide the following:
> >>> 1. interface to list/get/submit key requests (for archival and
> >>> recovery).
> >>> 2. interface to recover keys
> >>> 3. interface to approve/reject key recovery requests.
> >>> 4. input/output via XML/JSON/browser.
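> >>>
> >>> To give a feel for the shape of these (the class and type names below
> >>> are only illustrative - see the patch for the real ones), a resteasy
> >>> resource method ends up looking roughly like this:
> >>>
> >>> @Path("/pki/keyrequests")
> >>> public class KeyRequestsResource {
> >>>
> >>>     // List key requests; resteasy picks XML or JSON based on the
> >>>     // client's Accept header.
> >>>     @GET
> >>>     @Produces({ MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
> >>>     public KeyRequestInfos listRequests() { ... }
> >>>
> >>>     // Fetch a single request by its id.
> >>>     @GET
> >>>     @Path("{id}")
> >>>     @Produces({ MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
> >>>     public KeyRequestInfo getRequest(@PathParam("id") String id) { ... }
> >>> }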
> >>>
> >>> This is pretty much just a skeleton, as not all the functionality is
> >>> currently in the DRM. There is also no authentication/authz, as Jack has
> >>> yet to work that part out. But it is pretty close to what the RESTful
> >>> interface will probably look like - and the comments point out the
> >>> parts that are missing.
> >>>
> >>> Jack - please look to see how your new code would interact with this -
> >>> and also in terms of the input/output parameters/structures.
> >>>
> >>> Endi, Adam: please look to see if the structure/coding makes sense - or
> >>> if it can be improved. It's not at all final - but all feedback will
> >>> help.
> >> I'd forgotten how annoying I find the Bean API.
> >>
> >> You catch too many exceptions. Let them propagate as far as you can.
> >> You should not be catching them and then rethrowing them. Either catch
> >> them and be done with them, or let them bubble up to the top level.
> >>
> > That's fair. I planned to handle them at the top level - so I can just
> > let them bubble up.
> >
> >> I understand the reason for splitting the DAO off from the servlet,
> >> but I feel that the division here is too arbitrary. The Resource
> >> doesn't do any work, and the fact that these are all methods of the
> >> same objects, although they do the same thing, seems arbitrary. I
> >> realize you are following the examples given in the documentation. I
> >> think that the Repository already plays the role of the DAO, and
> >> putting an additional layer in here provides no additional insulation
> >> against leaky abstractions....
> >>
> > When I originally started this, I had a bunch of private methods in the
> > resource servlet that did the work that is now encapsulated in the DAO
> > classes. I also had the idea that the RequestQueue and Repository were
> > essentially DAO objects.
> >
> > But then I thought that I might want to add a function that would allow
> > one to submit a recovery request that is automatically processed and
> > returns the relevant key data. That would
> > then mean having logic to access both the request queue and the key
> > repository in the same servlet. In fact, more likely than not, I would
> > need to duplicate code to extract a key (using the repository) in both
> > the KeyRequest and Key resources. The alternative is for both resources
> > to use a KeyDAO object and call a common recoverKey() method.
> >
> > At that point, I started thinking about introducing DAO objects - and I
> > think the code is actually better for it.
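> >
> > To make that concrete (the signatures are approximate - this is just
> > the shape I have in mind), both resources would share something like:
> >
> > public class KeyDAO {
> >
> >     // Common logic for pulling a wrapped key out of the repository,
> >     // usable from both the Key and KeyRequest resources instead of
> >     // duplicating the extraction code in each.
> >     public KeyData recoverKey(RecoveryRequestData data) {
> >         ...
> >     }
> > }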
> >
> > The distinction in the code is not arbitrary. The Resource classes show
> > exactly which interfaces are being exposed - and how they are being
> > exposed, i.e. the XML structure needed to communicate. They also
> > ultimately detail how error conditions and exceptions are communicated
> > to the client. My idea was that all exceptions ultimately need to be
> > handled at this level. I did this by propagating them up - but I can
> > just let them pass through too.
> >
> > And they include code needed to validate the request -- more of which is
> > likely to be needed -- prior to sending the request to a DAO object to
> > extract data.
> >
> > The reason that they appear not to do any work is that the details of
> > how the engine interacts with the repo have been abstracted away.
> > Also, there are things that are missing still -- logging, audit logging,
> > more request validation, details on how to parse the search parameters
> > etc.
> >
> > All in all, the resource classes are going to do a lot more, and keeping
> > them focused on the "external" interactions is not a bad thing.
>
> All good ideas. If we think in 3 tiers, the resource is the interface
> tier, the DAO is the business logic, and the Key and KeyData classes
> are the data. But we don't need one DAO per data type, and I think a
> lot of this code can be simplified to start. But IKeyRecovery in this
> case is the DAO, with the data layer being KeyRecord. Of course, the
> code in those classes needs serious cleanup.
>
> The MultiMap object should be constrained to the Interface layer, and
> should be converted to a POJO up front. Ideally, an immutable one.
> Extract the following code into a builder.
>
>
> public RecoveryRequestData(MultivaluedMap<String, String> form) {
>     keyId = form.getFirst(KEY_ID);
>     requestId = form.getFirst(REQUEST_ID);
>     transWrappedSessionKey = form.getFirst(TRANS_WRAPPED_SESSION_KEY);
>     transWrappedPassphrase = form.getFirst(TRANS_WRAPPED_PASSPHRASE);
> }
>
> And then have the RecoveryRequestData take in the specific values it
> needs in its constructor. Note that they should not be strings, but
> instead more specific data types. It is OK for these objects to have a
> String constructor. So something like
>
>
> RecoveryRequestData(KeyId keyId, Request request, Key wrappedSessionKey,
>                     Passphrase wrappedPassphrase) { ... }
>
> You get the idea. If it has a String constructor and a correct toString
> function, it should be convertible.
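>
> So the only place that touches the MultivaluedMap is a small builder at
> the interface layer - roughly (using the constants and types above, and
> glossing over validation):
>
> public class RecoveryRequestDataBuilder {
>
>     // Pull the raw form values out at the interface layer, then hand
>     // back the immutable, strongly-typed POJO.
>     public static RecoveryRequestData fromForm(MultivaluedMap<String, String> form) {
>         return new RecoveryRequestData(
>                 new KeyId(form.getFirst(KEY_ID)),
>                 new Request(form.getFirst(REQUEST_ID)),
>                 new Key(form.getFirst(TRANS_WRAPPED_SESSION_KEY)),
>                 new Passphrase(form.getFirst(TRANS_WRAPPED_PASSPHRASE)));
>     }
> }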
>
> It may mean that we need to do something to tag them for marshalling. I
> fear that again the XML marshalling will force no-arg constructors.
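>
> If it does, one possible escape hatch (untested here, just how JAXB
> usually handles it) is to keep a no-arg constructor around purely for
> the marshaller, and have everything else go through the typed one:
>
> @XmlRootElement
> public class RecoveryRequestData {
>
>     // For JAXB only - it populates the fields reflectively when
>     // unmarshalling, so our code never needs to call this.
>     protected RecoveryRequestData() {
>     }
>
>     public RecoveryRequestData(KeyId keyId, Request request,
>             Key wrappedSessionKey, Passphrase wrappedPassphrase) { ... }
> }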
>
>
>
>
>
> >
> >> I also really don't like the split between KeyData and KeyDataInfo.
> >> It seems unnecessary.
> >>
> > They are two very different things.
> >
> > KeyDataInfo is something that you would get back if you wanted to list
> > the keys stored. For example, I might want to list all the keys stored
> > for a particular client ID. This would provide identifiers for the keys
> > and some basic data. We will likely need to add more fields as we flesh
> > it all out.
> >
> > KeyData is the actual key (appropriately wrapped). It is what comes
> > back when a key recovery request is processed.
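> >
> > Roughly speaking (the field names here are placeholders - the real set
> > is still being fleshed out):
> >
> > // What a listing returns: identifiers and basic metadata, but no key
> > // material.
> > public class KeyDataInfo {
> >     String keyURL;
> >     String clientID;
> >     String status;
> > }
> >
> > // What a processed recovery request returns: the key itself,
> > // appropriately wrapped for the client.
> > public class KeyData {
> >     String wrappedPrivateKey;
> > }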
> >
> > Are you suggesting that they be combined - and that some fields would
> > just be empty depending on the request? That actually may not be a bad
> > idea ...
>
> We should probably reflect the structure in the DirSrv. But I can see
> the argument for wanting to manage the Info and the key itself separately.
>
>
> >
> >> Let the complexity emerge. Write the code in a more inline fashion,
> >> and only refactor out once you have a sense of what the real structure
> >> is going to be. I'll try to send you back my version of the patch
> >> tomorrow: I'm about to lose laptop battery right now.
> >>
> > Adam - I did do that - and then started refactoring when I started
> > thinking about - for example - writing a call to automatically process
> > recovery requests.
> >
> > That said - this is a first draft. So keep the suggestions coming.
>
> Looks good, and I can see where you are coming from. I think that, to
> start with, you should refactor to static methods, and not hold on to
> state that you don't need. For example, you can make a helper function
> like this:
>
> public static IKeyRepository getKeyRepository() {
>     return ((IKeyRecoveryAuthority) CMS.getSubsystem("kra"))
>             .getKeyRepository();
> }
>
>
> and then something like
>
> Enumeration<IKeyRecord> e = KeyResource.getKeyRepository()
>         .searchKeys(filter, KeysResource.maxSize, KeysResource.maxTime);
>
> There is no need to hold on to references like this across method calls.
>
>
>
> I'm in meetings all day today and tomorrow, but should have more time
> to address this on Friday.
>
>
> >>> To test, you can do the following:
> >>> 1. Build the code. You will need to replace pki-common and pki-kra.
> >>> 2. pkicreate and configure a DRM. The needed changes to web.xml should
> >>> be in the new instance.
> >>> 3. Add links to the following jars in webapps/lib/. They should all be
> >>> in /usr/share/candlepin/lib:
> >>> --> javassist, jaxrs-api, jettison, resteasy-jaxb-provider,
> >>> resteasy-jaxrs, resteasy-jettison-provider, scannotation
> >>> 4. Archive some keys by doing enrollments with your CA or TPS.
> >>>
> >>> You could also set up the DRM instance to be controlled by eclipse in
> >>> the same way that we do for the CA instance. If you do this, you will
> >>> be able to step through the code with the debugger.
> >>>
> >>> You should be able to see archived enrollments/recoveries by going to:
> >>>
> >>> https://hostname:port/kra/pki/keyrequests
> >>> https://hostname:port/kra/pki/keyrequest/1
> >>>
> >>> using a browser, or using curl to get xml or json.
> >>>
> >>> Ade
> >
>