Blog Archives

Cocoa Debugging Tip [Tod Cunningham] (iDevBlogADay)

Posted by Tod Cunningham at iDevBlogADay

I’m attending my local Ann Arbor CocoaHeads meeting tomorrow, and the topic is
Objective Tips.  So I thought I would share a tip.

What do you do when you get a crash due to an uncaught exception such as:

2013-03-13 13:30:10.186 Picross[43233:1303] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[AppDelegate crash]: unrecognized selector sent to instance 0xc04de10'
*** First throw call stack:
(0x355b012 0x32ffe7e 0x35e64bd 0x354abbc 0x354a94e 0x3313663 0x12f54 0x3f0153f 0x3f13014 0x3f042e8 0x3f04450 0x926b5e12 0x9269dcca)
libc++abi.dylib: terminate called throwing an exception

This can get really frustrating, as you need to figure out where in your code it crashed.  Debugger to the rescue.  In the call stack, find the first "low" value; this usually represents your code.  Then you just do a symbol lookup on that address, such as the following when using LLDB:

im loo -a 0x12f54

This does an image lookup, which gives a nice dump including the file and line number of the offending code:

Address: Picross[0x00012f54] (Picross.__TEXT.__text + 67540)
Summary: Picross`__57-[AppDelegate application:didFinishLaunchingWithOptions:]_block_invoke112 + 52 at AppDelegate.m:216

You can see from this dump that the offending code was at line 216 of AppDelegate.m, inside a block where I was calling a selector that didn't exist.

I also find this GDB to LLDB guide to be a handy reference of the commands available in LLDB.

I hope this quick tip was helpful.  Please feel free to follow me on Twitter at @fivelakesstudio.  I would love to hear about your experiences with the debugger or any tips you might have.


How to compress, encode and write data to XML with zlib, base64 and xswi [Steffen Itterheim] (iDevBlogADay)

Posted by Steffen Itterheim at iDevBlogADay

In order to write Tiled’s TMX file format I needed to do exactly this: figure out how to compress data, encode it as a string, and write it to XML. I wrote down what I learned from using zlib, base64 and xswi – XML Stream writer for iOS (a single Objective-C class) while writing KoboldTouch‘s […]

Tips for Mobile Devs – Image Magick Command Line [Jake Gundersen] (iDevBlogADay)

Posted by Jake Gundersen at iDevBlogADay

When I first began contracting I stumbled onto a tool that I now use all the time, Image Magick. Image Magick is a C library that can be used in all kinds of ways, including inside your iOS app if you want. But I use it as a command line tool to manipulate image assets.

While there are lots of alternatives, most of which have a friendlier GUI, nothing beats the flexibility of the command line. If you aren't friends with the command line, don't worry, I'm not either. But what you can do with Image Magick makes it worth it.

A quick example. You have a 1024×1024 pixel icon file, and you want to output all the icon sizes for a universal app. You could do it with the following command line script (your original file is called 'bigIcon.png' and is in the same directory):

mkdir output
cp bigIcon.png output/iTunesArtwork@2x
convert bigIcon.png -resize 57x57 output/Icon.png
convert bigIcon.png -resize 114x114 output/Icon@2x.png
convert bigIcon.png -resize 72x72 output/Icon-72.png
convert bigIcon.png -resize 29x29 output/Icon-Small.png
convert bigIcon.png -resize 50x50 output/Icon-Small-50.png
convert bigIcon.png -resize 58x58 output/Icon-Small@2x.png
convert bigIcon.png -resize 50% output/iTunesArtwork

The first two lines create an output directory and then copy the bigIcon.png file into that directory, renaming it to 'iTunesArtwork@2x'. All the subsequent lines use ImageMagick's convert tool to resize the image to a specific size and put the output file, named appropriately, into the output folder. Take a look at that last line: instead of supplying a size, you can supply a ratio, in percent format, and it will do the math (1024×1024 * 0.5 = 512×512) for you.

You could supply two percentages, one for each dimension: '-resize 50%x40%'. You can also give it instructions on retaining aspect ratios. There are lots more options with the resize tool.

These commands can be put into a shell script, or copied and pasted onto the command line, and they will be done before you can even finish loading Photoshop.

Speed is nice, but that's not really the point. The real boon is the flexibility. What if your designer sends you a series of images in .psd format and you need them all in .png format? What if you need the output .png to be named the same as the layer name? What if you need it to create a retina and non-retina set of those images? You can do all that with ImageMagick.

You can also perform all manner of image processing using the convert tool. You can blur images with a Gaussian blur, do color alterations, crop images or enlarge their canvas, combine multiple images into a collage, and blend images; there aren't many things you can't do with it.

Check out the convert command line tool page for a list of options and their usage here. Also, Fred Weinhaus has put together a huge set of scripts using Image Magick to do more advanced image processing (things like edge detection, pixellization, cartoon filter, etc).

Installing Image Magick

There are lots of ways to install Image Magick; my personal favorite is to use Homebrew. It seems to me to be the easiest and cleanest. You can find out how to install Homebrew here. Once you have Homebrew installed, installing Image Magick is easy:

 brew install imagemagick 

If that gives you trouble, here’s a thread on SO talking about it.

There are lots of other ways to install Image Magick; the most direct is to just download the binary and put it in a directory (anywhere). In that case you'll need to set a handful of environment variables to use it. I used to do it this way, but brew is easier (and has always worked for me).

A couple more scripts

Earlier I mentioned that you could convert psd files to png files and even inspect the psd file to get the layer names. Here are a couple of examples of that. First, just convert all the psd files in a directory to png files (and put them in an ‘output’ folder):

for f in *.psd
do
    myStem=`echo $f | sed 's/.psd//'`
    convert $f[0] output/$myStem.png
done

The first line starts a for loop, iterating once for each .psd file in the current directory. The for, do, and done keywords are how the loop is constructed. The variable 'f' now contains the name of the psd file that the current loop iteration is working on. To access 'f' you must use '$f'.

The second line sets the variable 'myStem' to the file name without the .psd extension. 'echo $f' outputs the file name, and the '|' operator pipes that file name into the sed 's/.psd//' command, which does a search and replace (search for .psd, replace it with nothing).
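If you want to see that stem extraction on its own, you can run the pipe by itself. Here the filename is just a made-up stand-in, and nothing touches ImageMagick:

```shell
# Stand-in filename (not a real file; we only manipulate the text)
f="bigIcon.psd"
# Pipe the name through sed, replacing ".psd" with nothing
myStem=`echo $f | sed 's/.psd//'`
echo $myStem
```

This prints bigIcon, which is exactly the stem the loop then feeds to convert.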

Finally, the convert line uses the $f (the whole filename) with a [0] behind it. I’ll explain what [0] does in just a second. The last part of the line output/$myStem.png creates a filename in the output directory with a file extension of .png and the name of the original .psd file. ImageMagick knows from the extension which file format to create.

In Image Magick if you are dealing with .psd files and layers, [0] is the root and represents all the layers. If you put [1] you’d get the first layer. If you use convert on a .psd file without [x] it will iterate through all the layers, giving you a numbered file output for each layer, and an additional one for all the layers. It will automatically crop all these layers to their pixel bounds.

You can also rewrite this so it’s one line, like this:

 for f in *.psd; do stem=`echo $f | sed 's/.psd//'`;convert $f[0] output/$stem.png; done 

Finally, I promised that you could output those .psd layers, using the names of the layers as filenames. Here’s what that would look like:

num=`convert test.psd -format "%[scenes]" info: | head -n 1`
for ((i=1;i<$num;i++))
do
    myStem=`convert test.psd[$i] -verbose info: | grep "label:" | cut -d: -f 2 | cut -d: -f 2`
    convert test.psd[$i] $myStem.png
done

The first line gets the number of layers in the psd file (called 'test.psd'). I can't take credit for this script, nor can I completely explain it, because I got it from the Image Magick forums. The good news is you can do the same thing: google what you need to do with Image Magick and you can usually find a script that will help you.

As I said earlier, I’m not really a command line guy. Beauty is, you don’t need to be :)

The '-format "%[scenes]" info:' command retrieves the metadata and the '| head -n 1' piece reduces it to a single value (try it without the last chunk and see what you get). This places the number of layers, including the [0] layer (which is the root, not an actual distinct layer), into the num variable.
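You can see what head -n 1 does with any multi-line input. The repeated "3" lines below are a made-up stand-in for the one-line-per-layer output convert emits:

```shell
# Stand-in for multi-line metadata output: one count per layer
printf "3\n3\n3\n" | head -n 1
```

This prints a single 3, which is what ends up in the num variable.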

Next, the for line constructs a loop which starts at 1 (instead of 0) and runs while i < num, so it stops at num-1, the index of the last layer. This way you skip the root and only convert the individual layers. '$i' will now contain the index of the current layer.

The next line gets the layer name. 'convert test.psd[$i] -verbose info:' extracts a bunch of metadata from the .psd file. That is piped into the grep command, which searches the whole thing and retrieves the 'label:' line. The last two pipes just cut off any extra, unnecessary characters, leaving you with the name of that layer, held in the myStem variable.
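The grep-and-cut part of that pipeline works on ordinary text, so you can try it standalone. The "label: Background" line here is a made-up stand-in for one line of the verbose output:

```shell
# Made-up stand-in for one line of `convert -verbose info:` output
line="    label: Background"
# Keep the line containing "label:", then take the text after the colon
name=`echo "$line" | grep "label:" | cut -d: -f 2`
echo $name
```

This prints Background (the unquoted echo trims the leading space), which is the filename stem the script would use.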

Finally, the convert command is invoked on the current layer, using the $myStem variable to name the output file.

If you are put off by the bash scripting, don't be. You don't have to be a master to get good use out of it. Here are just a few quick things. The backtick '`' character (below the tilde on most keyboards) is necessary when you are putting the result of a command into a variable. The '|' character takes the result of one command and 'pipes' it into another. You can create loops with for . . . do . . . done. Set a variable by typing its name, but access it with the '$' character in front of it.
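Those four ideas are enough to read most one-off scripts. Here is a tiny self-contained sketch using all of them; the file names are made up, and echo stands in for convert so nothing here needs ImageMagick:

```shell
# Set a variable by typing its name; read it back with $
stems="stems:"

# Backticks capture a command's output into a variable;
# the pipe feeds echo's output into wc (tr strips padding spaces)
count=`echo "apple.psd banana.psd" | wc -w | tr -d ' '`

# A for...do...done loop, one iteration per name
for f in apple.psd banana.psd
do
    stem=`echo $f | sed 's/.psd//'`
    stems="$stems $stem"
done

echo "$stems ($count files)"
```

This prints "stems: apple banana (2 files)".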

If you can remember those things, you should be able to google the rest, like specific command functions (cut, grep, sed, etc).

I’ve only barely scratched the surface of what Image Magick can do. If you have other uses for mobile developers, put em in the comments!

Agile By Fire [Mark Granoff] (iDevBlogADay)

Posted by Mark Granoff at iDevBlogADay

One of the features of my new day job is that the development environment is full-on Agile. It’s my first experience in such an environment, and while I’ve worked in plenty of places that used bits and pieces of Agile, an “all Agile all the time” development model is quite eye opening. I’ve had to literally learn and adapt to Agile on-the-fly — by fire, if you will.

For the uninitiated or inexperienced developer, working in an Agile environment might feel like being micro-managed. We have a scrum meeting every day. We talk about every issue in the sprint… every day. And there’s a good chance you’ll get asked later in the day if you’re going to be done on time with your issues. Coming from a smaller company where we (in hindsight) pretended to be agile, all the meeting time seems like a lot of overhead to invest when I’d rather be coding. But after a month on the job, I can see that this methodology works to achieve several interesting goals, including accountability, predictability, and organizational growth.

Accountability

At every scrum every issue in the sprint is reviewed. The assigned developer must give a status and report if the work is on track or blocked. And if blocked, what the impediment is. This keeps the scrum master in the loop and aware of what may or may not prevent him (or her) from keeping his promises to his boss about deliverables. But it also keeps each developer accountable for their deliverables. There is no time lost, therefore, wondering when something needs to be done: It’s in the sprint. The developer committed to having it done. The whole team knows what he’s doing and what he committed to.

Predictability

Before a sprint starts, the team agrees to what work will be in the sprint, and when it will be completed (in 2 weeks, say). Usually you want to be able to say, at the end of a sprint, that you accomplished “X”. It’s important that a sprint be achievable, otherwise, you never deliver what you said you’d deliver (for the sprint). And the more achievable a sprint’s tasks, the more predictable your engineering efforts can be.

My first complete sprint comes to an end today. It was a little longer than normal because of the holidays. However, there was the right amount of work in the sprint to keep everyone busy while also allowing everyone to achieve their goals and complete the work on time. That alone is a nice feeling to run with into the next sprint.

Organizational Growth

At daily scrum meetings, and after a sprint completes, the pace and status of the work in the sprint is constantly evaluated to ensure that missteps are not made. Or, if they were, how to avoid similar missteps in the future. By analyzing our performance in real-time, we grow both as individual contributors and as a team. And ultimately we become a more productive part of the company.

I do not profess to be an Agile expert. In fact, I only know from what I’ve experienced to this point. To be sure, Agile takes some getting used to. I can already see, however, how it can be a beneficial methodology for development groups, especially if you open your mind to the possibilities it affords.

Scaling Cocos2D Node Positions with Display Resolution [Steffen Itterheim] (iDevBlogADay)

Posted by Steffen Itterheim at iDevBlogADay

Here’s a quick tip on how to design your scenes so that they scale up to higher resolution displays. For example when your app runs on a widescreen iPhone / iPod touch or on an iPad. This article is not about Retina displays, which use the same coordinate system and merely display higher resolution images. […]

Dynamic Pattern Images with Core Graphics [Mark Granoff] (iDevBlogADay)

Posted by Mark Granoff at iDevBlogADay

The image handling capabilities available in iOS (and OS X for that matter) are pretty spectacular. Using only high level APIs, you can work with images quite easily. But lurking very close to the high level APIs is Core Graphics, wherein the real power lies.

Background

With the recent release of iOS6 and more importantly the iPhone 5, I began updating one of my apps to take advantage of the large screen. This immediately presented some challenges, because now I had to support two screen sizes instead of one. Not a big deal, but in the case of this particular app, it uses a full-screen image overlay to provide a crucial visual effect.

The obvious solution (although not the best solution) was simply to provide two additional full-screen images for the iPhone 5 (one at 320×568, one at 640×1136). Those were easily created, but presented a few new issues:

  1. The app would now need to conditionally select the correct overlay image based on screen size, and
  2. The layout of the pattern on the overlay didn’t “break” nicely where I needed it to break (the repeated image was cut in half in some cases), and
  3. Adding more full-screen overlay images was only going to add to the size of the app, which is already large.

What to do?

Pattern images to the rescue!

I immediately considered creating images that represented a single copy of the pattern I wanted to use and including those in the app bundle. That would then allow me to write code like this:

 myView.backgroundColor = [UIColor colorWithPatternImage:overlayPattern]; 

That works and solves (1) and (3), but not (2). The solution to (2) really requires creating a pattern image dynamically that, when repeated, fills the view without any partial patterns appearing.

Enter Core Graphics

Core Graphics is incredibly powerful and I will not attempt to cover it all here. Heck, I don’t even purport to be a Core Graphics expert. But I did figure out how to do something that is pretty useful in creating images dynamically for use as pattern images.

To start, we have to understand something basic about Core Graphics: Everything you do requires a context, and there are at least a few different kinds of contexts you can work with. For our purposes, we need a bitmap context. We’ll draw in this context, and then ask Core Graphics to give us something we can work with more easily: a UIImage object.

Creating a bitmap context requires one C-style API call. (Yes, Core Graphics at this level uses all C style APIs.) I wrote a wrapper function:

CGContextRef CreateBitmapContext(int width, int height, CGColorSpaceRef colorSpace, CGImageAlphaInfo alpha)
{
    CGContextRef context = NULL;
    int bitmapBytesPerRow = (width * 4);

    context = CGBitmapContextCreate (NULL, // bitmapData
                                     width,
                                     height,
                                     8,    // bits per component
                                     bitmapBytesPerRow,
                                     colorSpace,
                                     alpha);
    return context;
}

And I call it this way:

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef myBitmapContext = CreateBitmapContext(myWidth, myHeight, colorSpace, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

The arguments myWidth and myHeight are the size of the image I ultimately want, so we need a context with a matching size. The color space and alpha arguments come into play later, but to draw in color, you need both an RGB color space and an alpha channel.

Now we can work with the context. We can fill it with a solid color:

// Fill the square with black.
CGContextSetRGBFillColor (myBitmapContext, 0, 0, 0, 1);
CGContextFillRect (myBitmapContext, CGRectMake (0, 0, myWidth, myHeight));
CGImageRef image = CGBitmapContextCreateImage (myBitmapContext);
CGContextRelease (myBitmapContext);

And we can draw in it (with a different color):

// Set the fill color to white
CGContextSetRGBFillColor(myBitmapContext, 1, 1, 1, 1);
// Draw a diamond
CGContextBeginPath(myBitmapContext);
CGContextMoveToPoint(myBitmapContext, myWidth/2.0, 2);
CGContextAddLineToPoint(myBitmapContext, myWidth-2.0, myHeight/2.0);
CGContextAddLineToPoint(myBitmapContext, myWidth/2.0, myHeight-2.0);
CGContextAddLineToPoint(myBitmapContext, 2, myHeight/2.0);
CGContextAddLineToPoint(myBitmapContext, myWidth/2.0, 2);
CGContextClosePath(myBitmapContext);
// Fill the diamond
CGContextFillPath(myBitmapContext);

I’ve put all this together in a very short demo project, available on Github. On an iPhone 4, it creates the following screen:

Nifty, eh?

Next time, I’ll expand this project to create patterns with holes in them using image masks.

Create Every iOS Icon For Your App With a Photoshop Script [Josh Jones] (iDevBlogADay)

Posted by Josh Jones at iDevBlogADay

When you create any kind of app that will go on Apple's App Store, they require you to have an icon for your app. Well, because of the number of different devices and resolutions out there, you'll need multiple copies of this icon before submission. This script can do that for you in a few seconds!

What Are the Icons For?

Here is a chart that explains the current icons:

File Name                         Size         Device(s)                          Purpose
Icon.png                          57 x 57      iPhone/iPod Touch                  Home Screen App Icon
Icon@2x.png                       114 x 114    iPhone/iPod Touch (Retina)         Home Screen App Icon
Icon-72.png                       72 x 72      iPad                               Home Screen App Icon
Icon-72@2x.png                    144 x 144    iPad (Retina)                      Home Screen App Icon
Icon-Small.png                    29 x 29      iPhone/iPod Touch, iPad            Spotlight Search Results and Settings (Settings for iPad)
Icon-Small@2x.png                 58 x 58      iPhone/iPod Touch, iPad (Retina)   Spotlight Search Results and Settings (Settings for iPad)
Icon-Small-50.png                 50 x 50      iPad                               Spotlight Search Results
Icon-Small-50@2x.png              100 x 100    iPad (Retina)                      Spotlight Search Results
iTunesArtwork (no extension)      512 x 512    iPhone/iPod Touch/iPad             App Store
iTunesArtwork@2x (no extension)   1024 x 1024  iPhone/iPod Touch/iPad             App Store (Retina)

Requirements

In order to do this you’re going to need:

  • Photoshop
  • A square PNG image 1024×1024 or greater
  • The script itself which can be downloaded from here.

Usage

The script itself is just a text file with a funky extension that Adobe products understand. To use this in Photoshop you're going to want to place it in the Photoshop scripts directory.

On Mac that’s usually located: Applications/Adobe Photoshop CS6/Presets/Scripts (Or whatever your version of Photoshop is)

Mac OS Photoshop Script Location

On Windows that’s usually located at: C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)\Presets\Scripts

Windows Photoshop Script Location

Once you place the file there, load up Photoshop and look under "File -> Scripts"; you should see "Create iOS Icons".

Here is a video showing how it works in action:

Final Notes

The original script was created by Matt Di Pasquale who shared it on GitHub 2 years ago. I added some extra checks, a couple missing icons, and the ability to output to a file.

If you find this script useful, please let me know! Feel free to “fork” it, as they say in GitHub land. I’m sure this could also be automated to be part of the Xcode build process as well, but I’ll leave that as a project for another day. ;)

Fast UITableView Scrolling with Network Image Load [Mark Granoff] (iDevBlogADay)

Posted by Mark Granoff at iDevBlogADay

This is an old and common thing to do in an iOS app. Doing it well — or at all — it turns out, is pretty easy, but perhaps not obvious.

The Problem

You have a table view with table cells that each contain an image you retrieve from the internet. The problem is that for every image load, the table scrolling stalls or becomes jerky while the image is loaded because that load is happening on the main UI thread.

Result: Poor user experience.

First Solution

If you’re a developer with any experience or formal computer science training, the first solution that might come to mind is to load images in the background. That’s easy enough, but likely requires a custom UITableViewCell. In your custom cell implementation, you could write a method (called from tableView:cellForRowAtIndexPath: on the cell object) to load the image. The implementation of that method can then create a Grand Central Dispatch block or NSInvocationOperation or something to go get the image for the cell in the background.

This solution works a little better, but has a few problems. First, while you now have images loading in the background, you are still loading every image for every cell that is requested. This is bad in a "fast scrolling" situation, where someone has flicked your table in one direction very fast. Cells are appearing and disappearing much faster than their images can be fetched and displayed. Many GCD blocks or NSInvocationOperations are being queued, and executed! The effect is that the CPU is still spending a lot of time loading images that, sadly, are no longer needed (because the cell for which the image was fetched has gone off screen, so to speak), so the table scrolling is still jerky (albeit probably less so). Worse, because table cells get re-used, you are likely to see a veritable slide show of images appear in the cells that ultimately do get displayed (once the table stops scrolling).

Result: Probably a better user experience from a performance perspective, but still a less than optimal visual user experience as the images fill in and change in the ultimately visible table cells.

First Solution Tweak

The first solution isn’t entirely bad, per se. It takes advantage of background processing to accomplish tasks that would otherwise delay the main thread. Excellent! But other issues are then revealed through the slide-show appearance of images in the final cells and the still-jerky performance of the UI.

The slide show effect boils down to the fact that cells are re-used, but the background tasks initiated for each new use of a cell do not know when this happens. So a tweak to this solution is to add to the custom cell object a sequence number. When the cell is dequeued for (re-)use, it gets a new sequence number (taken from a global, monotonically increasing variable). When the background image load is occurring, the sequence number in the cell can be compared with the sequence number recorded when the image load was initiated. If they are different, the actual display of the image can be skipped.

Result: Only the last few cells dequeued or created and presumably visible on screen have their images actually displayed. However, every cell ever requested still has a background task initiated to load the image. So the CPU problem remains.

Last Solution

The solutions to this point have not been entirely off base. It is “a good thing” to load images for table cells in the background (or better, from a local cache if you loaded it once before), especially if they are coming off the internet because you cannot be assured that the user has a fast or good connection.

The actual thing to do, which makes so much sense it’s a wonder why people don’t think of this straight away (I didn’t!), is to only load images (for any cells) if the table view is not decelerating! That is to say, if the table view is scrolling, don’t load any cell images. And only when the table view stops scrolling (or rather, is not decelerating), load the images for the then visible cells.

So simple! How do we do it?

Remember that a UITableView is actually a descendant of UIScrollView. So it responds to the same UIScrollViewDelegate methods as would a first-class UIScrollView and has available to it the same properties you would expect to find on a UIScrollView.

First, in your implementation of tableView:cellForRowAtIndexPath:, you should include something like:

...
if (!tableView.decelerating) {
    NSString *url = ...;
    [cell showImageURL:url];
}
...

The idea here is that on the initial display of your table, it will not be moving, so load the image (in the background, presumably). But during a fast scrolling scenario, or when the table is moving at all, image load will be skipped.

Next, you need to implement one of the UIScrollViewDelegate methods, something like this:

-(void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView
{
    NSArray *visibleCells = [self.tableView visibleCells];
    [visibleCells enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
        MyTableViewCell *cell = (MyTableViewCell *)obj;
        NSString *url = ...;
        [cell showImageURL:url];
    }];
}

When the table stops moving completely, this method is called automatically for you. As you can see, only the visible table cells have their images loaded.

Result: Awesome user experience (your app performs really fast) and your app only does as much work as is needed to create that experience (loads images for only visible cells).

So simple! Yet, not necessarily obvious. But certainly another example of how Apple has thought of everything when it comes to the iOS SDK.

Custom cocos2d Action for rotating a sprite around an arbitrary point [Toni Sala] (iDevBlogADay)

Posted by Toni Sala at iDevBlogADay

Source: Custom cocos2d Action for rotating a sprite around an arbitrary point – Indie Dev Stories

Cocos2d is an excellent framework. It has saved me tons of time during my game projects. It offers almost everything I need. However, sometimes there are features that are not supported by cocos2d.

This is the case of rotating a sprite around an arbitrary point. Rotation in cocos2d is based on the concept of anchor point. This is probably fine in 99% of situations. However, during the development of Muster my Monsters I needed to perform rotations around arbitrary points. The idea is to have a sprite "orbiting" around another sprite or some defined point in space.

cocos2d rotation around arbitrary point

Actually, you can achieve it using the concept of anchor points. You could define an anchor point that lies outside the content size of the sprite (see the last example in this article). However, anchor points in cocos2d are normalized, so you would need to figure out how to map your rotation centre to normalized coordinates (from 0 to 1). This is far from intuitive.

So, I decided to type some code to implement this functionality.

Rotating a point around another point

The general formula for rotating a point around another arbitrary point is the following:

p'x = cos(theta) * (px - ox) - sin(theta) * (py - oy) + ox
p'y = sin(theta) * (px - ox) + cos(theta) * (py - oy) + oy
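As a quick sanity check (the numbers here are my own, not from the original post): rotate the point p = (2, 1) by theta = 90° counter-clockwise around o = (1, 1).

```latex
p'_x = \cos 90^\circ \,(2-1) - \sin 90^\circ \,(1-1) + 1 = 0 \cdot 1 - 1 \cdot 0 + 1 = 1
p'_y = \sin 90^\circ \,(2-1) + \cos 90^\circ \,(1-1) + 1 = 1 \cdot 1 + 0 \cdot 0 + 1 = 2
```

So p ends up at (1, 2): a point one unit to the right of the centre moves to one unit above it, exactly a quarter turn.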

Based on this simple formula we will create the following cocos2d extensions.

An Objective-C category to rotate CCNodes around an arbitrary point

It is useful to extend the CCNode functionality to allow for this kind of rotation. But instead of subclassing CCNode I think it is better to create a category for it.

This is the header:

@interface CCNode (RotationAround)

/**
 Rotates a CCNode object to a certain angle around
 a certain rotation point by modifying its rotation
 attribute and position.
 */
-(void) rotateAroundPoint:(CGPoint)rotationPoint angle:(CGFloat)angle;

@end

And here we have the implementation:

#import "CCNode+RotationAround.h"

@implementation CCNode (RotationAround)

//p'x = cos(theta) * (px-ox) - sin(theta) * (py-oy) + ox
//p'y = sin(theta) * (px-ox) + cos(theta) * (py-oy) + oy

-(void) rotateAroundPoint:(CGPoint)rotationPoint angle:(CGFloat)angle
{
    CGFloat x = cos(CC_DEGREES_TO_RADIANS(-angle)) * (self.position.x - rotationPoint.x)
              - sin(CC_DEGREES_TO_RADIANS(-angle)) * (self.position.y - rotationPoint.y)
              + rotationPoint.x;
    CGFloat y = sin(CC_DEGREES_TO_RADIANS(-angle)) * (self.position.x - rotationPoint.x)
              + cos(CC_DEGREES_TO_RADIANS(-angle)) * (self.position.y - rotationPoint.y)
              + rotationPoint.y;

    self.position = ccp(x, y);
    self.rotation = angle;
}

@end

Take into account that the sin and cos functions operate in radians while cocos2d uses degrees. This is why I use the CC_DEGREES_TO_RADIANS() macro.

Very easy and handy ;)

A CCAction to rotate a CCNode around an arbitrary point

But one of the greatest features of cocos2d is the Actions system. So, having an action to animate a sprite around an arbitrary point is very powerful. Here you have the header file:

#import <Foundation/Foundation.h>
#import "cocos2d.h"

/**
 Rotates a CCNode object to a certain angle around
 a certain rotation point by modifying its rotation
 attribute and position.
 The direction will be decided by the shortest angle.
 */
@interface CCRotateAroundTo : CCRotateTo
{
    CGPoint rotationPoint_;
    CGPoint startPosition_;
}

/** creates the action */
+(id) actionWithDuration: (ccTime) t angle:(float) a rotationPoint:(CGPoint) rotationPoint;
/** initializes the action */
-(id) initWithDuration: (ccTime) t angle:(float) a rotationPoint:(CGPoint) rotationPoint;

@end

/**
 Rotates a CCNode object clockwise around a certain
 rotation point a number of degrees by modifying its
 rotation attribute and position.
 */
@interface CCRotateAroundBy : CCRotateBy
{
    CGPoint rotationPoint_;
    CGPoint startPosition_;
}

/** creates the action */
+(id) actionWithDuration: (ccTime) t angle:(float) a rotationPoint:(CGPoint) rotationPoint;
/** initializes the action */
-(id) initWithDuration: (ccTime) t angle:(float) a rotationPoint:(CGPoint) rotationPoint;

@end

And the implementation file:

#import "CCRotateAround.h"

//p'x = cos(theta) * (px-ox) - sin(theta) * (py-oy) + ox
//p'y = sin(theta) * (px-ox) + cos(theta) * (py-oy) + oy

@implementation CCRotateAroundTo

+(id) actionWithDuration: (ccTime) t angle:(float) a rotationPoint:(CGPoint) rotationPoint
{
    return [[[self alloc] initWithDuration:t angle:a rotationPoint:rotationPoint] autorelease];
}

-(id) initWithDuration: (ccTime) t angle:(float) a rotationPoint:(CGPoint) rotationPoint
{
    if( (self=[super initWithDuration: t angle: a]) )
    {
        rotationPoint_ = rotationPoint;
    }
    return self;
}

-(void) startWithTarget:(CCNode *)aTarget
{
    [super startWithTarget:aTarget];
    startPosition_ = [(CCNode*)target_ position];
}

-(void) update: (ccTime) t
{
    CGFloat x = cos(CC_DEGREES_TO_RADIANS(-diffAngle_*t)) * (startPosition_.x - rotationPoint_.x)
              - sin(CC_DEGREES_TO_RADIANS(-diffAngle_*t)) * (startPosition_.y - rotationPoint_.y)
              + rotationPoint_.x;
    CGFloat y = sin(CC_DEGREES_TO_RADIANS(-diffAngle_*t)) * (startPosition_.x - rotationPoint_.x)
              + cos(CC_DEGREES_TO_RADIANS(-diffAngle_*t)) * (startPosition_.y - rotationPoint_.y)
              + rotationPoint_.y;

    [target_ setPosition:ccp(x, y)];
    [target_ setRotation: (startAngle_ + diffAngle_ * t)];
}

@end

@implementation CCRotateAroundBy

+(id) actionWithDuration: (ccTime) t angle:(float) a rotationPoint:(CGPoint) rotationPoint
{
    return [[[self alloc] initWithDuration:t angle:a rotationPoint:rotationPoint] autorelease];
}

-(id) initWithDuration: (ccTime) t angle:(float) a rotationPoint:(CGPoint) rotationPoint
{
    if( (self=[super initWithDuration: t angle: a]) )
    {
        rotationPoint_ = rotationPoint;
    }
    return self;
}

-(void) startWithTarget:(CCNode *)aTarget
{
    [super startWithTarget:aTarget];
    startPosition_ = [(CCNode*)target_ position];
}

-(void) update: (ccTime) t
{
    CGFloat x = cos(CC_DEGREES_TO_RADIANS(-angle_*t)) * (startPosition_.x - rotationPoint_.x)
              - sin(CC_DEGREES_TO_RADIANS(-angle_*t)) * (startPosition_.y - rotationPoint_.y)
              + rotationPoint_.x;
    CGFloat y = sin(CC_DEGREES_TO_RADIANS(-angle_*t)) * (startPosition_.x - rotationPoint_.x)
              + cos(CC_DEGREES_TO_RADIANS(-angle_*t)) * (startPosition_.y - rotationPoint_.y)
              + rotationPoint_.y;

    [target_ setPosition:ccp(x, y)];
    [target_ setRotation: (startAngle_ + angle_ * t)];
}

@end

Here we have the typical cocos2d pairing of two versions of the action: CCRotateAroundTo and CCRotateAroundBy. Here is a usage example:

 CCRotateAroundBy *rotateAround = [CCRotateAroundBy actionWithDuration:1.0 angle:90 rotationPoint:screenCenter];
 [sprite runAction:rotateAround];

As usual, very easy to use :^]

Conclusion

It is indeed a very simple feature to implement, but it was not originally included in cocos2d. So, here you have it! Enjoy!

HTH!

Source: Custom cocos2d Action for rotating a sprite around an arbitrary point – Indie Dev Stories
Indie Dev Stories – Stories from an Independent Games Developer

Hype – How To Play [Tod Cunningham] (iDevBlogADay)

Posted by Tod Cunningham at iDevBlogADay

Ken and I have been struggling with the problem of how to teach people to play our games.  Most of our games are fairly niche, and we would like to broaden their appeal to people who might not know how to play.

We wanted something that would accomplish the following goals:

  • Quickly teach the players the basics of game play
  • Look integrated into the app
  • No download or streamed content
  • Measurable
  • We don’t want to inflate the App size too much
  • Something we can create without too much custom programming

What we ended up with was an integrated how-to-play tutorial implemented with an embedded WebView that plays back HTML5 generated via Hype.  That was a mouthful.  Good thing we have a video showing it in action:

Hype

Using Hype, you can create HTML5 web content with animations and interactivity.  The best part is that it creates really small output, as it just requires the art assets and some generated JavaScript to control the animations.

There are just a few basic types of objects you can use in Hype, such as Box, Text, Button, …

I wish Hype had arrows and other nicer callout objects built into it.  However, between the art assets I already had and using Snagit for the other assets, I was able to put together what I needed.

I was actually able to simulate gameplay just by using these basic Hype elements and its keyframing capability.  It only took one evening to finish the Hype project.  I thought about using something like Camtasia to record a video of some of these parts instead of rolling the play animations by hand, but I wanted the output really small, and it wasn’t that hard to simulate the effects I needed.  That said, given that you can embed video in Hype, it would be interesting to try it with embedded video clips.

HTML

You can easily generate the HTML assets for the Hype project.  It just produces a simple “main” html file and a folder containing the Javascript, Images, and other resources needed to run the project.

Given that this is just HTML, you can even upload it to a website and interact with it directly in a browser.

We just took these generated assets and added them to our iOS project so they would be built into the resource bundle.  If you do add it to your project, be sure to include it as a folder reference so Xcode will preserve the folder layout in the resource folder.  Otherwise, it will get flattened into a single folder, which could cause issues.

Loading WebView From Resource

You load resource-based files into the WebView just like you would load any local file.  Here is some example code that loads the above example:
    NSString     *howToPlayDevice = [FLUtil iPad] ? @"iPad" : @"";
    NSString     *resourcePath    = [[NSBundle mainBundle] resourcePath];
    NSString     *howToPlayPath   = [NSString stringWithFormat:@"%@/html/HowToPlay%@.html",
                                               resourcePath, howToPlayDevice];
    NSURL        *url             = [NSURL fileURLWithPath:howToPlayPath];
    NSURLRequest *requestObj      = [NSURLRequest requestWithURL:url];
    [self.howToPlayWebView loadRequest:requestObj];

One final tip: make sure the canvas in Hype is the same size as the WebView so it fits perfectly with no scaling or borders.  That’s all there is to it.

Measurable

One of the requirements was for us to be able to measure the effectiveness of this effort through Flurry.  In order to do that, we need to be able to communicate from the HTML WebView into the Objective-C code.

Alexandre Poirot has a nice article on How to Properly Call ObjectiveC From Javascript.  I didn’t need to use his entire framework, but I used the basic concept to allow Hype’s JavaScript to be intercepted by the WebView’s delegate.

In Hype, I set up a JavaScript function to post to a special URL that can be intercepted by the shouldStartLoadWithRequest UIWebView delegate method.

I used a custom URL scheme called “howtoplayscene” and passed Hype’s current scene name so I can tell which scene the user is viewing.  We do this in an iframe that we create and then destroy, so the user doesn’t see anything.  Plus, we can use the same HTML for the embedded WebView as well as a regular browser, although there won’t be any tracking when running in a regular browser.

Once the JavaScript is in place, each scene can then be configured to call the trackHowToPlayCurrentScene function when it’s loaded:
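The original post’s embedded snippet isn’t preserved in this archive, but the idea can be sketched like this (a hedged sketch: the function name follows the text above, “howtoplayscene” is the custom scheme described earlier, and `hypeDocument.currentSceneName()` is Hype’s JavaScript API for the current scene):

```javascript
// Build the tracking URL for a scene using the custom scheme.
function trackingURLForScene(sceneName) {
    return "howtoplayscene://" + encodeURIComponent(sceneName);
}

// Post the current scene name to the app. Inside the UIWebView the request
// is intercepted by the Objective-C delegate; in a regular browser the
// unknown scheme simply does nothing visible.
function trackHowToPlayCurrentScene(hypeDocument) {
    var url = trackingURLForScene(hypeDocument.currentSceneName());
    // Create a hidden iframe, point it at the custom URL, then remove it
    // right away so the user never sees anything change on screen.
    var iframe = document.createElement("iframe");
    iframe.style.display = "none";
    iframe.src = url;
    document.body.appendChild(iframe);
    document.body.removeChild(iframe);
}
```

Each scene’s “On Scene Load” action would then run this function.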

That’s all there is to it on the JavaScript side.  The Objective-C side is fairly straightforward:
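The embedded Objective-C listing also didn’t survive the archive. A minimal sketch of what such a delegate could look like (the delegate method is the real UIWebViewDelegate API; the Flurry call and event naming are assumptions based on the event names listed below):

```objc
- (BOOL)           webView:(UIWebView *)webView
shouldStartLoadWithRequest:(NSURLRequest *)request
            navigationType:(UIWebViewNavigationType)navigationType
{
    NSURL *url = request.URL;

    // Intercept the custom tracking scheme posted from Hype's JavaScript.
    if ([url.scheme isEqualToString:@"howtoplayscene"])
    {
        NSString *sceneName = [url.host stringByReplacingPercentEscapesUsingEncoding:NSUTF8StringEncoding];
        [Flurry logEvent:[NSString stringWithFormat:@"HowToPlay.Scene.%@", sceneName]];
        return NO;  // swallow the request; nothing should actually load
    }

    // A navigation to the external "done" page could be intercepted here as
    // well, so the in-app tutorial is dismissed instead of leaving the app.
    return YES;
}
```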

With this in place, we will get custom events in Flurry that look something like:

  • HowToPlay.Scene.Goal
  • HowToPlay.Scene.Tool Bar
  • Show HowToPlay.Scene.Step 1
  • Show HowToPlay.Scene.Step 2
We will be able to track how many users make it through all the steps and what percentage of users drop off at each point.  Hopefully, with this information we can make the tutorial better and find where, if anywhere, users are getting stuck.  We will also be able to correlate information such as the percentage of people who complete the tutorial and go on to purchase.

The other little trick I use, as seen above, is for when the user completes the How To Play tutorial.  I have the HTML navigate to “www.fivelakesstudio.com/Five_Lakes_Studio/PicrossHD.html“.  I picked that URL so that, when run from an external browser, hitting the done button takes the user to the webpage for PicrossHD.  However, when run from within the app, we close the UIWebView.

Conclusion

This will go live soon, and we are excited to get this into the hands of our new users.  I hope we can teach more people how to play Picross HD, and hopefully they will like it.

Please feel free to follow me on twitter at @fivelakesstudio. I would love to hear about your experiences on how to onboard people to your app. Let me know if you found this useful, and especially if you now understand how to play Picross HD.

Thanks for reading, and be sure to visit us at Five Lakes Studio.  I should mention that I also work for TechSmith, the makers of Snagit and Camtasia.
