Is it possible to use AVCaptureVideoDataOutput and AVCaptureMovieFileOutput at the same time?

I want to record video and grab frames at the same time in my code.

I am using AVCaptureVideoDataOutput for grabbing frames and AVCaptureMovieFileOutput for video recording. But they don't work at the same time: I get error code -12780 when both are attached, even though each works individually.
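For reference, the setup in question looks roughly like this (a minimal Swift sketch with illustrative names, not the original code):

    import AVFoundation

    let session = AVCaptureSession()
    // ... attach an AVCaptureDeviceInput for the camera here ...

    let frameOutput = AVCaptureVideoDataOutput()   // for per-frame grabs
    let movieOutput = AVCaptureMovieFileOutput()   // for recording to a file

    if session.canAddOutput(frameOutput) {
        session.addOutput(frameOutput)
    }
    // On iOS this is the combination that fails: with the video data output
    // already attached, adding the movie file output is refused, or the
    // session errors with -12780 once both are running.
    if session.canAddOutput(movieOutput) {
        session.addOutput(movieOutput)
    }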

I searched for this problem but found no answer. Has anyone had the same experience, or can anyone explain it? It has been bothering me for quite a while.

Thanks.

I can't answer the specific question, but I have successfully recorded video and grabbed frames at the same time using:

  • AVCaptureSession and AVCaptureVideoDataOutput to route frames into my own code
  • AVAssetWriter, AVAssetWriterInput and AVAssetWriterInputPixelBufferAdaptor to write the frames out to an H.264-encoded movie file

That's without investigating audio. I end up getting CMSampleBuffers from the capture session and then pushing them into the pixel buffer adaptor.

EDIT: so my code looks more or less like the following, with the bits you're not having problems with skimmed over and scope issues ignored:

    /* to ensure I'm given incoming CMSampleBuffers */
    AVCaptureSession *captureSession = alloc and init, set your preferred preset/etc;
    AVCaptureDevice *captureDevice = default for video, probably;
    AVCaptureDeviceInput *deviceInput = input with device as above, and attach it to the session;
    AVCaptureVideoDataOutput *output = output for 32BGRA pixel format, with me as the delegate and a suitable dispatch queue affixed.

    /* to prepare for output; I'll output 640x480 in H.264, via an asset writer */
    NSDictionary *outputSettings =
        [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithInt:640], AVVideoWidthKey,
            [NSNumber numberWithInt:480], AVVideoHeightKey,
            AVVideoCodecH264, AVVideoCodecKey,
            nil];

    AVAssetWriterInput *assetWriterInput =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:outputSettings];

    /* I'm going to push pixel buffers to it, so will need an
       AVAssetWriterInputPixelBufferAdaptor, to expect the same 32BGRA input
       as I've asked the AVCaptureVideoDataOutput to supply */
    AVAssetWriterInputPixelBufferAdaptor *pixelBufferAdaptor =
        [[AVAssetWriterInputPixelBufferAdaptor alloc]
            initWithAssetWriterInput:assetWriterInput
            sourcePixelBufferAttributes:
                [NSDictionary dictionaryWithObjectsAndKeys:
                    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
                    kCVPixelBufferPixelFormatTypeKey,
                    nil]];

    /* that's going to go somewhere, I imagine you've got the URL for that sorted,
       so create a suitable asset writer; we'll put our H.264 within the normal
       MPEG4 container */
    AVAssetWriter *assetWriter =
        [[AVAssetWriter alloc]
            initWithURL:URLFromSomwhere
               fileType:AVFileTypeMPEG4
                  error:you need to check error conditions, this example is too lazy];
    [assetWriter addInput:assetWriterInput];

    /* we need to warn the input to expect real time data incoming, so that it
       tries to avoid being unavailable at inopportune moments */
    assetWriterInput.expectsMediaDataInRealTime = YES;

    ... eventually ...

    [assetWriter startWriting];
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
    [captureSession startRunning];

    ... elsewhere ...

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
               fromConnection:(AVCaptureConnection *)connection
    {
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // a very dense way to keep track of the time at which this frame
        // occurs relative to the output stream, but it's just an example!
        static int64_t frameNumber = 0;
        if (assetWriterInput.readyForMoreMediaData)
            [pixelBufferAdaptor appendPixelBuffer:imageBuffer
                             withPresentationTime:CMTimeMake(frameNumber, 25)];
        frameNumber++;
    }

    ... and, to stop, ensuring the output file is finished properly ...

    [captureSession stopRunning];
    [assetWriter finishWriting];

This is the Swift version of Tommy's answer.

    // Set up the Capture Session
    // Add the Inputs
    // Add the Outputs

    let outputSettings: [String: Any] = [
        AVVideoWidthKey: 640,
        AVVideoHeightKey: 480,
        AVVideoCodecKey: AVVideoCodecH264
    ]

    let assetWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo,
                                              outputSettings: outputSettings)

    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: assetWriterInput,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
        ])

    // AVAssetWriter's initialiser throws; check the error properly in real code
    let assetWriter = try AVAssetWriter(outputURL: URLFromSomwhere,
                                        fileType: AVFileTypeMPEG4)
    assetWriter.add(assetWriterInput)
    assetWriterInput.expectsMediaDataInRealTime = true

    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: kCMTimeZero)
    captureSession.startRunning()

    // a very dense way to keep track of the time at which this frame
    // occurs relative to the output stream, but it's just an example!
    // Declared outside the callback so it persists between frames.
    var frameNumber: Int64 = 0

    func captureOutput(_ captureOutput: AVCaptureOutput,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        if assetWriterInput.isReadyForMoreMediaData {
            pixelBufferAdaptor.append(imageBuffer,
                                      withPresentationTime: CMTimeMake(frameNumber, 25))
        }
        frameNumber += 1
    }

    captureSession.stopRunning()
    assetWriter.finishWriting()

However, I can't guarantee it's 100% accurate, because I'm new to Swift.
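One refinement worth noting (an editorial sketch, not part of either answer above): rather than manufacturing presentation times from a frame counter at an assumed 25 fps, you can reuse the timestamp the capture session already stamped on each sample buffer:

    func captureOutput(_ captureOutput: AVCaptureOutput,
                       didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              assetWriterInput.isReadyForMoreMediaData else { return }

        // The buffer already carries the capture clock's timestamp, which
        // stays correct even when frames are dropped or the rate varies.
        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        pixelBufferAdaptor.append(imageBuffer, withPresentationTime: time)
    }

If you do that, pass the first buffer's timestamp to startSession(atSourceTime:) instead of kCMTimeZero, otherwise the movie will begin with a long empty gap.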