I was watching the WWDC video "Sharing code between iOS and OS X", which offers great tips on cross-platform development. However, they seemed to gloss over how to structure event handling for the canvas (the main document view; see the video). Shared code was used for document rendering (based on Core Animation / Core Graphics), but (presumably) not for event handling.

I can think of a few ways to accomplish this:

1. Build a UIView (or NSView on macOS) hierarchy with appropriate event handling logic (gesture recognizers) for each element on the canvas. This would essentially replicate the view hierarchy one would build were the app not cross-platform. It seems, though, that keeping this hierarchy in sync with the rendering hierarchy would be somewhat tedious.
2. Instantiate views and gesture recognizers on demand when a user interacts with an element of the document. I'm not sure how feasible this is, but a touch (or click) would be hit-tested against the rendering hierarchy, and an appropriate view constructed for the duration of the interaction (see the sketch below).
3. Handle all interaction logic in the controller, essentially giving up on using views and their conveniences, but not having to keep hierarchies in sync.

Thoughts? Anyone with experience doing this?
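To make option 2 a bit more concrete, here's a rough sketch of the kind of thing I have in mind. All the names (CanvasViewController, documentLayer, the handlers) are hypothetical, not code from the video, and the tap-then-manipulate flow is just one assumption about how the transient view could get its gestures:

```swift
import UIKit

// Hypothetical sketch of option 2: no persistent view hierarchy; a view and
// its recognizers exist only for the duration of an interaction.
final class CanvasViewController: UIViewController {

    /// Root of the shared, cross-platform CALayer rendering hierarchy.
    let documentLayer = CALayer()
    private var activeOverlay: UIView?

    override func viewDidLoad() {
        super.viewDidLoad()
        documentLayer.frame = view.bounds
        view.layer.addSublayer(documentLayer)
        // A single recognizer on the canvas begins every interaction.
        view.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(beginInteraction(_:))))
    }

    @objc private func beginInteraction(_ tap: UITapGestureRecognizer) {
        let point = tap.location(in: view)
        // Hit-test the rendering hierarchy, not a view hierarchy.
        guard let hitLayer = documentLayer.hitTest(point),
              hitLayer !== documentLayer else { return }

        // Construct a view for this element, just for this interaction.
        let frame = view.layer.convert(hitLayer.frame, from: hitLayer.superlayer)
        let overlay = UIView(frame: frame)
        overlay.addGestureRecognizer(
            UIPanGestureRecognizer(target: self, action: #selector(drag(_:))))
        view.addSubview(overlay)
        activeOverlay = overlay
    }

    @objc private func drag(_ pan: UIPanGestureRecognizer) {
        // Update the data model here; the layer hierarchy re-renders from it.
        if pan.state == .ended || pan.state == .cancelled {
            activeOverlay?.removeFromSuperview()   // tear down when done
            activeOverlay = nil
        }
    }
}
```

One wrinkle, as I understand it: a recognizer added while a touch is already in progress won't track that touch, which is why this sketch starts each interaction with a completed tap rather than creating the overlay mid-gesture.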
If you're primarily considering gesture recognizers, then I think it's not quite idiomatic to think of this as "event handling", which normally refers to the routing of UIEvent/NSEvent objects. Gesture recognizers use the target-action pattern to deliver their results. Certainly, the target-action pattern uses the responder chain, so it's not entirely inappropriate to think of "event handling", but the point is that the gesture recognizer's target is a separate property from its view, so there's no real presumption that the target "ought" to be a view.

I'm not sure what's at stake with your concern over synchronization. The gesture recognizer must be attached to a view, but its target can be any convenient object in the responder chain. It depends on your app design which object might serve the purpose best.

I would say that it's currently usual to avoid putting business logic in views (that is, not to subclass NSView or UIView except for customized drawing) and to put the business logic in the view controller (which typically needs subclassing for other reasons), but there's no absolute rule about it.

Even in a cross-platform app, it's feasible (both for views and view controllers) to subclass once per platform - to add functions that support the shared behavior but have to be implemented in terms of the specific platform - and to subclass again to provide the shared behavior. If you do this carefully, you can have one source file for each subclass, each in the relevant target, using the same subclass name, and then use a single source file for the sub-subclass, in both targets.

> essentially giving up on using views and their conveniences

What conveniences are you giving up when putting the action methods in view controllers rather than views? It's not obvious to me that there's any downside, in general.
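For what it's worth, here's a minimal sketch of that subclassing arrangement, with hypothetical class names (CanvasViewBase, CanvasView). Each platform target compiles its own CanvasViewBase file plus the one shared CanvasView file:

```swift
// CanvasViewBase.swift in the iOS target (hypothetical names throughout):
import UIKit

class CanvasViewBase: UIView {
    /// Platform-specific support for the shared behavior.
    var backingLayer: CALayer { layer }   // UIViews are always layer-backed
}

// CanvasViewBase.swift in the macOS target - same class name, separate file:
import AppKit

class CanvasViewBase: NSView {
    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        wantsLayer = true                 // NSView layers are opt-in
    }
    required init?(coder: NSCoder) {
        super.init(coder: coder)
        wantsLayer = true
    }
    var backingLayer: CALayer { layer! }
}

// CanvasView.swift, compiled into both targets:
final class CanvasView: CanvasViewBase {
    /// Shared behavior, written only against the platform-neutral surface
    /// that CanvasViewBase exposes.
    func refresh() {
        backingLayer.setNeedsDisplay()
    }
}
```

The shared sub-subclass never mentions UIKit or AppKit directly, which is what makes the single source file possible.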
Thanks for your reply! A colleague pointed out that my original wording is ambiguous. By "accomplish this" I mean to do what they did, not to improve upon it by sharing more code. Unfortunately, I can't edit my original post.

I'm using the term "event handling" generically to refer to handling of what the user does. I certainly don't presume that the gesture recognizer's target is the view.

I'm discussing the app presented in the video (Keynote), not my own app. The CALayer rendering hierarchy already has to be synchronized with the data model. Adding another hierarchy of views for event handling/gesture recognition would add more synchronization code. However, if views are only created transiently in response to user actions (option 2), this might result in simpler code.
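To illustrate the synchronization code I mean, here's roughly the shape of the pass that already has to exist to keep layers matched to the model (Element and CanvasRenderer are made-up stand-ins, not Keynote's types). A persistent view hierarchy (option 1) would need a second pass of exactly this shape for views:

```swift
import Foundation
import QuartzCore

/// Hypothetical model type; stands in for whatever the document holds.
struct Element {
    let id: UUID
    var frame: CGRect
}

/// The sync pass that already exists for rendering: keep one CALayer per
/// model element, creating and removing layers as the model changes.
final class CanvasRenderer {
    let rootLayer = CALayer()
    private var layersByID: [UUID: CALayer] = [:]

    func sync(to elements: [Element]) {
        var seen = Set<UUID>()
        for element in elements {
            seen.insert(element.id)
            let layer = layersByID[element.id] ?? {
                let l = CALayer()
                layersByID[element.id] = l
                rootLayer.addSublayer(l)
                return l
            }()
            layer.frame = element.frame
        }
        // Remove layers whose elements were deleted from the model.
        for (id, layer) in layersByID where !seen.contains(id) {
            layer.removeFromSuperlayer()
            layersByID.removeValue(forKey: id)
        }
    }
}
```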